00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 1754
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3015
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.009 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/dsa-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.010 The recommended git tool is: git
00:00:00.010 using credential 00000000-0000-0000-0000-000000000002
00:00:00.013 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/dsa-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.027 Fetching changes from the remote Git repository
00:00:00.029 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.046 Using shallow fetch with depth 1
00:00:00.046 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.046 > git --version # timeout=10
00:00:00.064 > git --version # 'git version 2.39.2'
00:00:00.064 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.064 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.064 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.435 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.448 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.459 Checking out Revision f964f6d3463483adf05cc5c086f2abd292e05f1d (FETCH_HEAD)
00:00:02.459 > git config core.sparsecheckout # timeout=10
00:00:02.469 > git read-tree -mu HEAD # timeout=10
00:00:02.486 > git checkout -f f964f6d3463483adf05cc5c086f2abd292e05f1d # timeout=5
00:00:02.508 Commit message: "ansible/roles/custom_facts: Drop nvme features"
00:00:02.508 > git rev-list --no-walk f964f6d3463483adf05cc5c086f2abd292e05f1d # timeout=10
00:00:02.665 [Pipeline] Start of Pipeline
00:00:02.680 [Pipeline] library
00:00:02.682 Loading library shm_lib@master
00:00:06.508 Library shm_lib@master is cached. Copying from home.
00:00:06.546 [Pipeline] node
00:00:06.631 Running on FCP07 in /var/jenkins/workspace/dsa-phy-autotest
00:00:06.634 [Pipeline] {
00:00:06.648 [Pipeline] catchError
00:00:06.649 [Pipeline] {
00:00:06.668 [Pipeline] wrap
00:00:06.678 [Pipeline] {
00:00:06.689 [Pipeline] stage
00:00:06.692 [Pipeline] { (Prologue)
00:00:06.923 [Pipeline] sh
00:00:07.208 + logger -p user.info -t JENKINS-CI
00:00:07.225 [Pipeline] echo
00:00:07.226 Node: FCP07
00:00:07.231 [Pipeline] sh
00:00:07.528 [Pipeline] setCustomBuildProperty
00:00:07.538 [Pipeline] echo
00:00:07.539 Cleanup processes
00:00:07.542 [Pipeline] sh
00:00:07.822 + sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk
00:00:07.822 3214022 sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk
00:00:07.837 [Pipeline] sh
00:00:08.120 ++ sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk
00:00:08.120 ++ grep -v 'sudo pgrep'
00:00:08.120 ++ awk '{print $1}'
00:00:08.120 + sudo kill -9
00:00:08.120 + true
00:00:08.139 [Pipeline] cleanWs
00:00:08.150 [WS-CLEANUP] Deleting project workspace...
00:00:08.150 [WS-CLEANUP] Deferred wipeout is used...
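For reference, the "Cleanup processes" step traced above reduces to the following shell pattern. This is a minimal sketch reconstructed from the xtrace output, not the pipeline's actual script, and the guard against an empty PID list is an assumption about intent:

    #!/usr/bin/env bash
    WORKSPACE=/var/jenkins/workspace/dsa-phy-autotest
    # List leftover SPDK processes from a previous run, drop the pgrep line
    # itself, and keep only the PID column.
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # 'kill -9' with no PIDs exits non-zero, which is why the trace shows a
    # bare '+ sudo kill -9' followed by '+ true'; '|| true' keeps the step green.
    sudo kill -9 $pids || true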
00:00:08.156 [WS-CLEANUP] done
00:00:08.161 [Pipeline] setCustomBuildProperty
00:00:08.172 [Pipeline] sh
00:00:08.451 + sudo git config --global --replace-all safe.directory '*'
00:00:08.531 [Pipeline] nodesByLabel
00:00:08.533 Found a total of 1 nodes with the 'sorcerer' label
00:00:08.543 [Pipeline] httpRequest
00:00:08.548 HttpMethod: GET
00:00:08.549 URL: http://10.211.164.96/packages/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz
00:00:08.551 Sending request to url: http://10.211.164.96/packages/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz
00:00:08.569 Response Code: HTTP/1.1 200 OK
00:00:08.569 Success: Status code 200 is in the accepted range: 200,404
00:00:08.569 Saving response body to /var/jenkins/workspace/dsa-phy-autotest/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz
00:00:28.336 [Pipeline] sh
00:00:28.621 + tar --no-same-owner -xf jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz
00:00:28.640 [Pipeline] httpRequest
00:00:28.645 HttpMethod: GET
00:00:28.646 URL: http://10.211.164.96/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz
00:00:28.646 Sending request to url: http://10.211.164.96/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz
00:00:28.668 Response Code: HTTP/1.1 200 OK
00:00:28.669 Success: Status code 200 is in the accepted range: 200,404
00:00:28.669 Saving response body to /var/jenkins/workspace/dsa-phy-autotest/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz
00:01:01.457 [Pipeline] sh
00:01:01.740 + tar --no-same-owner -xf spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz
00:01:04.303 [Pipeline] sh
00:01:04.578 + git -C spdk log --oneline -n5
00:01:04.578 36faa8c31 bdev/nvme: Fix the case that namespace was removed during reset
00:01:04.578 e2cb5a5ee bdev/nvme: Factor out nvme_ns active/inactive check into a helper function
00:01:04.578 4b134b4ab bdev/nvme: Delay callbacks when the next operation is a failover
00:01:04.578 d2ea4ecb1 llvm/vfio: Suppress checking leaks for `spdk_nvme_ctrlr_alloc_io_qpair`
00:01:04.578 3b33f4333 test/nvme/cuse: Fix typo
00:01:04.588 [Pipeline] }
00:01:04.599 [Pipeline] // stage
00:01:04.606 [Pipeline] stage
00:01:04.607 [Pipeline] { (Prepare)
00:01:04.623 [Pipeline] writeFile
00:01:04.640 [Pipeline] sh
00:01:04.942 + logger -p user.info -t JENKINS-CI
00:01:04.955 [Pipeline] sh
00:01:05.237 + logger -p user.info -t JENKINS-CI
00:01:05.248 [Pipeline] sh
00:01:05.529 + cat autorun-spdk.conf
00:01:05.529 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:05.529 SPDK_TEST_ACCEL_DSA=1
00:01:05.529 SPDK_TEST_ACCEL_IAA=1
00:01:05.529 SPDK_TEST_NVMF=1
00:01:05.529 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:05.529 SPDK_RUN_ASAN=1
00:01:05.529 SPDK_RUN_UBSAN=1
00:01:05.537 RUN_NIGHTLY=1
00:01:05.541 [Pipeline] readFile
00:01:05.565 [Pipeline] withEnv
00:01:05.567 [Pipeline] {
00:01:05.579 [Pipeline] sh
00:01:05.861 + set -ex
00:01:05.861 + [[ -f /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf ]]
00:01:05.861 + source /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf
00:01:05.861 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:05.861 ++ SPDK_TEST_ACCEL_DSA=1
00:01:05.861 ++ SPDK_TEST_ACCEL_IAA=1
00:01:05.861 ++ SPDK_TEST_NVMF=1
00:01:05.861 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:05.861 ++ SPDK_RUN_ASAN=1
00:01:05.861 ++ SPDK_RUN_UBSAN=1
00:01:05.861 ++ RUN_NIGHTLY=1
00:01:05.861 + case $SPDK_TEST_NVMF_NICS in
00:01:05.861 + DRIVERS=
00:01:05.861 + [[ -n '' ]]
00:01:05.861 + exit 0
00:01:05.871 [Pipeline] }
00:01:05.885 [Pipeline] // withEnv
00:01:05.889 [Pipeline] }
00:01:05.901 [Pipeline] // stage
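The "set -ex" trace above is the NIC-selection gate: it sources autorun-spdk.conf and exits cleanly because SPDK_TEST_NVMF_NICS is unset for this job. A minimal sketch of that logic, where the mlx5 arm and the modprobe call are illustrative assumptions rather than anything shown in the log:

    set -ex
    [[ -f /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf ]]
    source /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf
    case $SPDK_TEST_NVMF_NICS in
        mlx5) DRIVERS=mlx5_ib ;;   # hypothetical arm for an RDMA-capable NIC
        *)    DRIVERS= ;;          # this run requests no NICs
    esac
    if [[ -n $DRIVERS ]]; then
        sudo modprobe -a $DRIVERS  # hypothetical: load the selected drivers
    fi
    exit 0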
00:01:05.908 [Pipeline] catchError
00:01:05.909 [Pipeline] {
00:01:05.920 [Pipeline] timeout
00:01:05.921 Timeout set to expire in 50 min
00:01:05.922 [Pipeline] {
00:01:05.934 [Pipeline] stage
00:01:05.936 [Pipeline] { (Tests)
00:01:05.949 [Pipeline] sh
00:01:06.232 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/dsa-phy-autotest
00:01:06.232 ++ readlink -f /var/jenkins/workspace/dsa-phy-autotest
00:01:06.232 + DIR_ROOT=/var/jenkins/workspace/dsa-phy-autotest
00:01:06.232 + [[ -n /var/jenkins/workspace/dsa-phy-autotest ]]
00:01:06.232 + DIR_SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
00:01:06.232 + DIR_OUTPUT=/var/jenkins/workspace/dsa-phy-autotest/output
00:01:06.232 + [[ -d /var/jenkins/workspace/dsa-phy-autotest/spdk ]]
00:01:06.232 + [[ ! -d /var/jenkins/workspace/dsa-phy-autotest/output ]]
00:01:06.232 + mkdir -p /var/jenkins/workspace/dsa-phy-autotest/output
00:01:06.232 + [[ -d /var/jenkins/workspace/dsa-phy-autotest/output ]]
00:01:06.232 + cd /var/jenkins/workspace/dsa-phy-autotest
00:01:06.232 + source /etc/os-release
00:01:06.232 ++ NAME='Fedora Linux'
00:01:06.232 ++ VERSION='38 (Cloud Edition)'
00:01:06.232 ++ ID=fedora
00:01:06.232 ++ VERSION_ID=38
00:01:06.232 ++ VERSION_CODENAME=
00:01:06.232 ++ PLATFORM_ID=platform:f38
00:01:06.232 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:06.232 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:06.232 ++ LOGO=fedora-logo-icon
00:01:06.232 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:06.232 ++ HOME_URL=https://fedoraproject.org/
00:01:06.232 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:06.232 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:06.232 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:06.232 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:06.232 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:06.232 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:06.232 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:06.232 ++ SUPPORT_END=2024-05-14
00:01:06.232 ++ VARIANT='Cloud Edition'
00:01:06.232 ++ VARIANT_ID=cloud
00:01:06.232 + uname -a
00:01:06.232 Linux spdk-fcp-07 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:06.232 + sudo /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh status
00:01:08.146 Hugepages
00:01:08.146 node hugesize free / total
00:01:08.146 node0 1048576kB 0 / 0
00:01:08.146 node0 2048kB 0 / 0
00:01:08.146 node1 1048576kB 0 / 0
00:01:08.146 node1 2048kB 0 / 0
00:01:08.146
00:01:08.146 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:08.146 DSA 0000:6a:01.0 8086 0b25 0 idxd - -
00:01:08.146 IAA 0000:6a:02.0 8086 0cfe 0 idxd - -
00:01:08.146 DSA 0000:6f:01.0 8086 0b25 0 idxd - -
00:01:08.146 IAA 0000:6f:02.0 8086 0cfe 0 idxd - -
00:01:08.146 DSA 0000:74:01.0 8086 0b25 0 idxd - -
00:01:08.146 IAA 0000:74:02.0 8086 0cfe 0 idxd - -
00:01:08.146 DSA 0000:79:01.0 8086 0b25 0 idxd - -
00:01:08.146 IAA 0000:79:02.0 8086 0cfe 0 idxd - -
00:01:08.408 NVMe 0000:c9:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:08.408 NVMe 0000:ca:00.0 8086 0a54 1 nvme nvme1 nvme1n1
00:01:08.408 DSA 0000:e7:01.0 8086 0b25 1 idxd - -
00:01:08.408 IAA 0000:e7:02.0 8086 0cfe 1 idxd - -
00:01:08.408 DSA 0000:ec:01.0 8086 0b25 1 idxd - -
00:01:08.408 IAA 0000:ec:02.0 8086 0cfe 1 idxd - -
00:01:08.408 DSA 0000:f1:01.0 8086 0b25 1 idxd - -
00:01:08.408 IAA 0000:f1:02.0 8086 0cfe 1 idxd - -
00:01:08.408 DSA 0000:f6:01.0 8086 0b25 1 idxd - -
00:01:08.408 IAA 0000:f6:02.0 8086 0cfe 1 idxd - -
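The hugepage portion of the "setup.sh status" report above can be approximated with standard sysfs reads. This sketch is not the script's own implementation, only a rough equivalent for the per-node free/total counts it prints:

    # Per-NUMA-node free/total counts for 1 GiB and 2 MiB hugepages.
    for node in /sys/devices/system/node/node*; do
        for sz in 1048576 2048; do
            dir=$node/hugepages/hugepages-${sz}kB
            echo "$(basename "$node") ${sz}kB" \
                 "$(cat "$dir"/free_hugepages) / $(cat "$dir"/nr_hugepages)"
        done
    done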
00:01:08.408 + rm -f /tmp/spdk-ld-path
00:01:08.408 + source autorun-spdk.conf
00:01:08.408 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:08.408 ++ SPDK_TEST_ACCEL_DSA=1
00:01:08.408 ++ SPDK_TEST_ACCEL_IAA=1
00:01:08.408 ++ SPDK_TEST_NVMF=1
00:01:08.408 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:08.408 ++ SPDK_RUN_ASAN=1
00:01:08.408 ++ SPDK_RUN_UBSAN=1
00:01:08.408 ++ RUN_NIGHTLY=1
00:01:08.408 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:08.408 + [[ -n '' ]]
00:01:08.408 + sudo git config --global --add safe.directory /var/jenkins/workspace/dsa-phy-autotest/spdk
00:01:08.408 + for M in /var/spdk/build-*-manifest.txt
00:01:08.408 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:08.408 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/dsa-phy-autotest/output/
00:01:08.408 + for M in /var/spdk/build-*-manifest.txt
00:01:08.408 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:08.408 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/dsa-phy-autotest/output/
00:01:08.408 ++ uname
00:01:08.408 + [[ Linux == \L\i\n\u\x ]]
00:01:08.408 + sudo dmesg -T
00:01:08.408 + sudo dmesg --clear
00:01:08.408 + dmesg_pid=3215553
00:01:08.408 + [[ Fedora Linux == FreeBSD ]]
00:01:08.408 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:08.408 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:08.408 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:08.408 + [[ -x /usr/src/fio-static/fio ]]
00:01:08.408 + export FIO_BIN=/usr/src/fio-static/fio
00:01:08.409 + FIO_BIN=/usr/src/fio-static/fio
00:01:08.409 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\d\s\a\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:08.409 + sudo dmesg -Tw
00:01:08.409 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:08.409 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:08.409 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:08.409 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:08.409 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:08.409 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:08.409 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:08.409 + spdk/autorun.sh /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf
00:01:08.409 Test configuration:
00:01:08.409 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:08.409 SPDK_TEST_ACCEL_DSA=1
00:01:08.409 SPDK_TEST_ACCEL_IAA=1
00:01:08.409 SPDK_TEST_NVMF=1
00:01:08.409 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:08.409 SPDK_RUN_ASAN=1
00:01:08.409 SPDK_RUN_UBSAN=1
00:01:08.409 RUN_NIGHTLY=1
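The dmesg commands traced above set up kernel-log capture around the test run. A sketch of the pattern; the backgrounding and the log-file destination are assumptions, since the trace records only the commands and the resulting PID:

    sudo dmesg -T             # snapshot whatever is already in the ring buffer
    sudo dmesg --clear        # start the run with an empty buffer
    sudo dmesg -Tw > "$DIR_OUTPUT/dmesg.log" &   # follow new messages in background
    dmesg_pid=$!              # kept (3215553 above) so it can be killed at teardown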
20:18:26 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh
00:01:08.671 20:18:26 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:08.671 20:18:26 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:08.671 20:18:26 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:08.671 20:18:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:08.671 20:18:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:08.671 20:18:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:08.671 20:18:26 -- paths/export.sh@5 -- $ export PATH
00:01:08.671 20:18:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:08.671 20:18:26 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output
00:01:08.671 20:18:26 -- common/autobuild_common.sh@435 -- $ date +%s
00:01:08.671 20:18:26 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714155506.XXXXXX
00:01:08.671 20:18:26 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714155506.ZlWSeV
00:01:08.671 20:18:26 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:01:08.671 20:18:26 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:01:08.671 20:18:26 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/'
00:01:08.671 20:18:26 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:08.671 20:18:26 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:08.671 20:18:26 -- common/autobuild_common.sh@451 -- $ get_config_params
00:01:08.671 20:18:26 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:01:08.671 20:18:26 -- common/autotest_common.sh@10 -- $ set +x
00:01:08.671 20:18:26 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:01:08.671 20:18:26 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:08.671 20:18:26 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:08.671 20:18:26 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/dsa-phy-autotest/spdk
00:01:08.671 20:18:26 -- spdk/autobuild.sh@16 -- $ date -u
00:01:08.671 Fri Apr 26 06:18:26 PM UTC 2024
00:01:08.671 20:18:26 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:08.671 LTS-24-g36faa8c31
00:01:08.671 20:18:26 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:08.671 20:18:26 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:08.671 20:18:26 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:01:08.671 20:18:26 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:01:08.671 20:18:26 -- common/autotest_common.sh@10 -- $ set +x
00:01:08.671 ************************************
00:01:08.671 START TEST asan
00:01:08.671 ************************************
00:01:08.671 20:18:26 -- common/autotest_common.sh@1104 -- $ echo 'using asan'
00:01:08.671 using asan
00:01:08.671
00:01:08.671 real 0m0.000s
00:01:08.671 user 0m0.000s
00:01:08.671 sys 0m0.000s
00:01:08.671 20:18:26 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:08.671 20:18:26 -- common/autotest_common.sh@10 -- $ set +x
00:01:08.671 ************************************
00:01:08.671 END TEST asan
00:01:08.671 ************************************
00:01:08.671 20:18:26 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:08.671 20:18:26 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:08.671 20:18:26 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:01:08.671 20:18:26 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:01:08.671 20:18:26 -- common/autotest_common.sh@10 -- $ set +x
00:01:08.671 ************************************
00:01:08.671 START TEST ubsan
00:01:08.671 ************************************
00:01:08.671 20:18:26 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan'
00:01:08.671 using ubsan
00:01:08.671
00:01:08.671 real 0m0.000s
00:01:08.671 user 0m0.000s
00:01:08.671 sys 0m0.000s
00:01:08.671 20:18:26 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:08.671 20:18:26 -- common/autotest_common.sh@10 -- $ set +x
00:01:08.671 ************************************
00:01:08.671 END TEST ubsan
00:01:08.671 ************************************
00:01:08.671 20:18:26 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:08.671 20:18:26 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:08.671 20:18:26 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:08.671 20:18:26 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:08.671 20:18:26 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:08.671 20:18:26 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:08.671 20:18:26 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:08.671 20:18:26 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:08.671 20:18:26 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/dsa-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
00:01:08.671 Using default SPDK env in /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk
00:01:08.671 Using default DPDK in /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build
00:01:08.933 Using 'verbs' RDMA provider
00:01:19.878 Configuring ISA-L (logfile: /var/jenkins/workspace/dsa-phy-autotest/spdk/isa-l/spdk-isal.log)...done.
00:01:32.108 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/dsa-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:01:32.108 Creating mk/config.mk...done.
00:01:32.108 Creating mk/cc.flags.mk...done.
00:01:32.108 Type 'make' to build.
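The START TEST/END TEST banners and the real/user/sys lines in the trace come from the run_test wrapper in autotest_common.sh. Roughly, and only as a reconstruction from this trace rather than the actual source:

    run_test() {
        # corresponds to the "'[' 3 -le 1 ']'" argument-count guard seen above
        [ $# -le 1 ] && { echo "usage: run_test <name> <command...>"; return 1; }
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                  # emits the real/user/sys lines seen above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

So "run_test asan echo 'using asan'" prints the banners, runs the trivial echo, and reports the near-zero timings recorded in the log.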
00:01:32.108 20:18:49 -- spdk/autobuild.sh@69 -- $ run_test make make -j128
00:01:32.108 20:18:49 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:01:32.108 20:18:49 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:01:32.108 20:18:49 -- common/autotest_common.sh@10 -- $ set +x
00:01:32.108 ************************************
00:01:32.108 START TEST make
00:01:32.108 ************************************
00:01:32.108 20:18:49 -- common/autotest_common.sh@1104 -- $ make -j128
00:01:32.108 make[1]: Nothing to be done for 'all'.
00:01:36.308 The Meson build system
00:01:36.308 Version: 1.3.1
00:01:36.308 Source dir: /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk
00:01:36.308 Build dir: /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build-tmp
00:01:36.308 Build type: native build
00:01:36.308 Program cat found: YES (/usr/bin/cat)
00:01:36.308 Project name: DPDK
00:01:36.308 Project version: 23.11.0
00:01:36.308 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:36.308 C linker for the host machine: cc ld.bfd 2.39-16
00:01:36.308 Host machine cpu family: x86_64
00:01:36.308 Host machine cpu: x86_64
00:01:36.308 Message: ## Building in Developer Mode ##
00:01:36.308 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:36.308 Program check-symbols.sh found: YES (/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:36.308 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:36.308 Program python3 found: YES (/usr/bin/python3)
00:01:36.308 Program cat found: YES (/usr/bin/cat)
00:01:36.308 Compiler for C supports arguments -march=native: YES
00:01:36.309 Checking for size of "void *" : 8
00:01:36.309 Checking for size of "void *" : 8 (cached)
00:01:36.309 Library m found: YES
00:01:36.309 Library numa found: YES
00:01:36.309 Has header "numaif.h" : YES
00:01:36.309 Library fdt found: NO
00:01:36.309 Library execinfo found: NO
00:01:36.309 Has header "execinfo.h" : YES
00:01:36.309 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:36.309 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:36.309 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:36.309 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:36.309 Run-time dependency openssl found: YES 3.0.9
00:01:36.309 Run-time dependency libpcap found: YES 1.10.4
00:01:36.309 Has header "pcap.h" with dependency libpcap: YES
00:01:36.309 Compiler for C supports arguments -Wcast-qual: YES
00:01:36.309 Compiler for C supports arguments -Wdeprecated: YES
00:01:36.309 Compiler for C supports arguments -Wformat: YES
00:01:36.309 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:36.309 Compiler for C supports arguments -Wformat-security: NO
00:01:36.309 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:36.309 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:36.309 Compiler for C supports arguments -Wnested-externs: YES
00:01:36.309 Compiler for C supports arguments -Wold-style-definition: YES
00:01:36.309 Compiler for C supports arguments -Wpointer-arith: YES
00:01:36.309 Compiler for C supports arguments -Wsign-compare: YES
00:01:36.309 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:36.309 Compiler for C supports arguments -Wundef: YES
00:01:36.309 Compiler for C supports arguments -Wwrite-strings: YES
00:01:36.309 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:36.309 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:36.309 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:36.309 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:36.309 Program objdump found: YES (/usr/bin/objdump)
00:01:36.309 Compiler for C supports arguments -mavx512f: YES
00:01:36.309 Checking if "AVX512 checking" compiles: YES
00:01:36.309 Fetching value of define "__SSE4_2__" : 1
00:01:36.309 Fetching value of define "__AES__" : 1
00:01:36.309 Fetching value of define "__AVX__" : 1
00:01:36.309 Fetching value of define "__AVX2__" : 1
00:01:36.309 Fetching value of define "__AVX512BW__" : 1
00:01:36.309 Fetching value of define "__AVX512CD__" : 1
00:01:36.309 Fetching value of define "__AVX512DQ__" : 1
00:01:36.309 Fetching value of define "__AVX512F__" : 1
00:01:36.309 Fetching value of define "__AVX512VL__" : 1
00:01:36.309 Fetching value of define "__PCLMUL__" : 1
00:01:36.309 Fetching value of define "__RDRND__" : 1
00:01:36.309 Fetching value of define "__RDSEED__" : 1
00:01:36.309 Fetching value of define "__VPCLMULQDQ__" : 1
00:01:36.309 Fetching value of define "__znver1__" : (undefined)
00:01:36.309 Fetching value of define "__znver2__" : (undefined)
00:01:36.309 Fetching value of define "__znver3__" : (undefined)
00:01:36.309 Fetching value of define "__znver4__" : (undefined)
00:01:36.309 Library asan found: YES
00:01:36.309 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:36.309 Message: lib/log: Defining dependency "log"
00:01:36.309 Message: lib/kvargs: Defining dependency "kvargs"
00:01:36.309 Message: lib/telemetry: Defining dependency "telemetry"
00:01:36.309 Library rt found: YES
00:01:36.309 Checking for function "getentropy" : NO
00:01:36.309 Message: lib/eal: Defining dependency "eal"
00:01:36.309 Message: lib/ring: Defining dependency "ring"
00:01:36.309 Message: lib/rcu: Defining dependency "rcu"
00:01:36.309 Message: lib/mempool: Defining dependency "mempool"
00:01:36.309 Message: lib/mbuf: Defining dependency "mbuf"
00:01:36.309 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:36.309 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:36.309 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:36.309 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:36.309 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:36.309 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:01:36.309 Compiler for C supports arguments -mpclmul: YES
00:01:36.309 Compiler for C supports arguments -maes: YES
00:01:36.309 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:36.309 Compiler for C supports arguments -mavx512bw: YES
00:01:36.309 Compiler for C supports arguments -mavx512dq: YES
00:01:36.309 Compiler for C supports arguments -mavx512vl: YES
00:01:36.309 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:36.309 Compiler for C supports arguments -mavx2: YES
00:01:36.309 Compiler for C supports arguments -mavx: YES
00:01:36.309 Message: lib/net: Defining dependency "net"
00:01:36.309 Message: lib/meter: Defining dependency "meter"
00:01:36.309 Message: lib/ethdev: Defining dependency "ethdev"
00:01:36.309 Message: lib/pci: Defining dependency "pci"
00:01:36.309 Message: lib/cmdline: Defining dependency "cmdline"
00:01:36.309 Message: lib/hash: Defining dependency "hash"
00:01:36.309 Message: lib/timer: Defining dependency "timer"
00:01:36.309 Message: lib/compressdev: Defining dependency "compressdev"
"compressdev" 00:01:36.309 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:36.309 Message: lib/dmadev: Defining dependency "dmadev" 00:01:36.309 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:36.309 Message: lib/power: Defining dependency "power" 00:01:36.309 Message: lib/reorder: Defining dependency "reorder" 00:01:36.309 Message: lib/security: Defining dependency "security" 00:01:36.309 Has header "linux/userfaultfd.h" : YES 00:01:36.309 Has header "linux/vduse.h" : YES 00:01:36.309 Message: lib/vhost: Defining dependency "vhost" 00:01:36.309 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:36.309 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:36.309 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:36.309 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:36.309 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:36.309 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:36.309 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:36.309 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:36.309 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:36.309 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:36.309 Program doxygen found: YES (/usr/bin/doxygen) 00:01:36.309 Configuring doxy-api-html.conf using configuration 00:01:36.309 Configuring doxy-api-man.conf using configuration 00:01:36.309 Program mandb found: YES (/usr/bin/mandb) 00:01:36.309 Program sphinx-build found: NO 00:01:36.309 Configuring rte_build_config.h using configuration 00:01:36.309 Message: 00:01:36.309 ================= 00:01:36.309 Applications Enabled 00:01:36.309 ================= 00:01:36.309 00:01:36.309 apps: 00:01:36.309 00:01:36.309 00:01:36.309 Message: 00:01:36.309 ================= 00:01:36.309 Libraries Enabled 00:01:36.309 ================= 00:01:36.309 00:01:36.309 libs: 00:01:36.309 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:36.309 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:36.309 cryptodev, dmadev, power, reorder, security, vhost, 00:01:36.309 00:01:36.309 Message: 00:01:36.309 =============== 00:01:36.309 Drivers Enabled 00:01:36.309 =============== 00:01:36.309 00:01:36.309 common: 00:01:36.309 00:01:36.309 bus: 00:01:36.309 pci, vdev, 00:01:36.309 mempool: 00:01:36.309 ring, 00:01:36.309 dma: 00:01:36.309 00:01:36.309 net: 00:01:36.309 00:01:36.309 crypto: 00:01:36.309 00:01:36.309 compress: 00:01:36.309 00:01:36.309 vdpa: 00:01:36.309 00:01:36.309 00:01:36.309 Message: 00:01:36.309 ================= 00:01:36.309 Content Skipped 00:01:36.309 ================= 00:01:36.309 00:01:36.309 apps: 00:01:36.309 dumpcap: explicitly disabled via build config 00:01:36.309 graph: explicitly disabled via build config 00:01:36.309 pdump: explicitly disabled via build config 00:01:36.309 proc-info: explicitly disabled via build config 00:01:36.309 test-acl: explicitly disabled via build config 00:01:36.309 test-bbdev: explicitly disabled via build config 00:01:36.309 test-cmdline: explicitly disabled via build config 00:01:36.309 test-compress-perf: explicitly disabled via build config 00:01:36.309 test-crypto-perf: explicitly disabled via build config 00:01:36.309 test-dma-perf: explicitly disabled via build config 00:01:36.309 test-eventdev: explicitly disabled via build config 00:01:36.309 test-fib: 
00:01:36.309 test-flow-perf: explicitly disabled via build config
00:01:36.309 test-gpudev: explicitly disabled via build config
00:01:36.309 test-mldev: explicitly disabled via build config
00:01:36.309 test-pipeline: explicitly disabled via build config
00:01:36.309 test-pmd: explicitly disabled via build config
00:01:36.309 test-regex: explicitly disabled via build config
00:01:36.309 test-sad: explicitly disabled via build config
00:01:36.309 test-security-perf: explicitly disabled via build config
00:01:36.309
00:01:36.309 libs:
00:01:36.309 metrics: explicitly disabled via build config
00:01:36.309 acl: explicitly disabled via build config
00:01:36.309 bbdev: explicitly disabled via build config
00:01:36.309 bitratestats: explicitly disabled via build config
00:01:36.309 bpf: explicitly disabled via build config
00:01:36.309 cfgfile: explicitly disabled via build config
00:01:36.309 distributor: explicitly disabled via build config
00:01:36.309 efd: explicitly disabled via build config
00:01:36.309 eventdev: explicitly disabled via build config
00:01:36.309 dispatcher: explicitly disabled via build config
00:01:36.309 gpudev: explicitly disabled via build config
00:01:36.309 gro: explicitly disabled via build config
00:01:36.309 gso: explicitly disabled via build config
00:01:36.309 ip_frag: explicitly disabled via build config
00:01:36.309 jobstats: explicitly disabled via build config
00:01:36.309 latencystats: explicitly disabled via build config
00:01:36.309 lpm: explicitly disabled via build config
00:01:36.309 member: explicitly disabled via build config
00:01:36.309 pcapng: explicitly disabled via build config
00:01:36.309 rawdev: explicitly disabled via build config
00:01:36.309 regexdev: explicitly disabled via build config
00:01:36.309 mldev: explicitly disabled via build config
00:01:36.309 rib: explicitly disabled via build config
00:01:36.309 sched: explicitly disabled via build config
00:01:36.309 stack: explicitly disabled via build config
00:01:36.309 ipsec: explicitly disabled via build config
00:01:36.309 pdcp: explicitly disabled via build config
00:01:36.309 fib: explicitly disabled via build config
00:01:36.309 port: explicitly disabled via build config
00:01:36.310 pdump: explicitly disabled via build config
00:01:36.310 table: explicitly disabled via build config
00:01:36.310 pipeline: explicitly disabled via build config
00:01:36.310 graph: explicitly disabled via build config
00:01:36.310 node: explicitly disabled via build config
00:01:36.310
00:01:36.310 drivers:
00:01:36.310 common/cpt: not in enabled drivers build config
00:01:36.310 common/dpaax: not in enabled drivers build config
00:01:36.310 common/iavf: not in enabled drivers build config
00:01:36.310 common/idpf: not in enabled drivers build config
00:01:36.310 common/mvep: not in enabled drivers build config
00:01:36.310 common/octeontx: not in enabled drivers build config
00:01:36.310 bus/auxiliary: not in enabled drivers build config
00:01:36.310 bus/cdx: not in enabled drivers build config
00:01:36.310 bus/dpaa: not in enabled drivers build config
00:01:36.310 bus/fslmc: not in enabled drivers build config
00:01:36.310 bus/ifpga: not in enabled drivers build config
00:01:36.310 bus/platform: not in enabled drivers build config
00:01:36.310 bus/vmbus: not in enabled drivers build config
00:01:36.310 common/cnxk: not in enabled drivers build config
00:01:36.310 common/mlx5: not in enabled drivers build config
00:01:36.310 common/nfp: not in enabled drivers build config
00:01:36.310 common/qat: not in enabled drivers build config
00:01:36.310 common/sfc_efx: not in enabled drivers build config
00:01:36.310 mempool/bucket: not in enabled drivers build config
00:01:36.310 mempool/cnxk: not in enabled drivers build config
00:01:36.310 mempool/dpaa: not in enabled drivers build config
00:01:36.310 mempool/dpaa2: not in enabled drivers build config
00:01:36.310 mempool/octeontx: not in enabled drivers build config
00:01:36.310 mempool/stack: not in enabled drivers build config
00:01:36.310 dma/cnxk: not in enabled drivers build config
00:01:36.310 dma/dpaa: not in enabled drivers build config
00:01:36.310 dma/dpaa2: not in enabled drivers build config
00:01:36.310 dma/hisilicon: not in enabled drivers build config
00:01:36.310 dma/idxd: not in enabled drivers build config
00:01:36.310 dma/ioat: not in enabled drivers build config
00:01:36.310 dma/skeleton: not in enabled drivers build config
00:01:36.310 net/af_packet: not in enabled drivers build config
00:01:36.310 net/af_xdp: not in enabled drivers build config
00:01:36.310 net/ark: not in enabled drivers build config
00:01:36.310 net/atlantic: not in enabled drivers build config
00:01:36.310 net/avp: not in enabled drivers build config
00:01:36.310 net/axgbe: not in enabled drivers build config
00:01:36.310 net/bnx2x: not in enabled drivers build config
00:01:36.310 net/bnxt: not in enabled drivers build config
00:01:36.310 net/bonding: not in enabled drivers build config
00:01:36.310 net/cnxk: not in enabled drivers build config
00:01:36.310 net/cpfl: not in enabled drivers build config
00:01:36.310 net/cxgbe: not in enabled drivers build config
00:01:36.310 net/dpaa: not in enabled drivers build config
00:01:36.310 net/dpaa2: not in enabled drivers build config
00:01:36.310 net/e1000: not in enabled drivers build config
00:01:36.310 net/ena: not in enabled drivers build config
00:01:36.310 net/enetc: not in enabled drivers build config
00:01:36.310 net/enetfec: not in enabled drivers build config
00:01:36.310 net/enic: not in enabled drivers build config
00:01:36.310 net/failsafe: not in enabled drivers build config
00:01:36.310 net/fm10k: not in enabled drivers build config
00:01:36.310 net/gve: not in enabled drivers build config
00:01:36.310 net/hinic: not in enabled drivers build config
00:01:36.310 net/hns3: not in enabled drivers build config
00:01:36.310 net/i40e: not in enabled drivers build config
00:01:36.310 net/iavf: not in enabled drivers build config
00:01:36.310 net/ice: not in enabled drivers build config
00:01:36.310 net/idpf: not in enabled drivers build config
00:01:36.310 net/igc: not in enabled drivers build config
00:01:36.310 net/ionic: not in enabled drivers build config
00:01:36.310 net/ipn3ke: not in enabled drivers build config
00:01:36.310 net/ixgbe: not in enabled drivers build config
00:01:36.310 net/mana: not in enabled drivers build config
00:01:36.310 net/memif: not in enabled drivers build config
00:01:36.310 net/mlx4: not in enabled drivers build config
00:01:36.310 net/mlx5: not in enabled drivers build config
00:01:36.310 net/mvneta: not in enabled drivers build config
00:01:36.310 net/mvpp2: not in enabled drivers build config
00:01:36.310 net/netvsc: not in enabled drivers build config
00:01:36.310 net/nfb: not in enabled drivers build config
00:01:36.310 net/nfp: not in enabled drivers build config
00:01:36.310 net/ngbe: not in enabled drivers build config
00:01:36.310 net/null: not in enabled drivers build config
00:01:36.310 net/octeontx: not in enabled drivers build config
00:01:36.310 net/octeon_ep: not in enabled drivers build config
00:01:36.310 net/pcap: not in enabled drivers build config
00:01:36.310 net/pfe: not in enabled drivers build config
00:01:36.310 net/qede: not in enabled drivers build config
00:01:36.310 net/ring: not in enabled drivers build config
00:01:36.310 net/sfc: not in enabled drivers build config
00:01:36.310 net/softnic: not in enabled drivers build config
00:01:36.310 net/tap: not in enabled drivers build config
00:01:36.310 net/thunderx: not in enabled drivers build config
00:01:36.310 net/txgbe: not in enabled drivers build config
00:01:36.310 net/vdev_netvsc: not in enabled drivers build config
00:01:36.310 net/vhost: not in enabled drivers build config
00:01:36.310 net/virtio: not in enabled drivers build config
00:01:36.310 net/vmxnet3: not in enabled drivers build config
00:01:36.310 raw/*: missing internal dependency, "rawdev"
00:01:36.310 crypto/armv8: not in enabled drivers build config
00:01:36.310 crypto/bcmfs: not in enabled drivers build config
00:01:36.310 crypto/caam_jr: not in enabled drivers build config
00:01:36.310 crypto/ccp: not in enabled drivers build config
00:01:36.310 crypto/cnxk: not in enabled drivers build config
00:01:36.310 crypto/dpaa_sec: not in enabled drivers build config
00:01:36.310 crypto/dpaa2_sec: not in enabled drivers build config
00:01:36.310 crypto/ipsec_mb: not in enabled drivers build config
00:01:36.310 crypto/mlx5: not in enabled drivers build config
00:01:36.310 crypto/mvsam: not in enabled drivers build config
00:01:36.310 crypto/nitrox: not in enabled drivers build config
00:01:36.310 crypto/null: not in enabled drivers build config
00:01:36.310 crypto/octeontx: not in enabled drivers build config
00:01:36.310 crypto/openssl: not in enabled drivers build config
00:01:36.310 crypto/scheduler: not in enabled drivers build config
00:01:36.310 crypto/uadk: not in enabled drivers build config
00:01:36.310 crypto/virtio: not in enabled drivers build config
00:01:36.310 compress/isal: not in enabled drivers build config
00:01:36.310 compress/mlx5: not in enabled drivers build config
00:01:36.310 compress/octeontx: not in enabled drivers build config
00:01:36.310 compress/zlib: not in enabled drivers build config
00:01:36.310 regex/*: missing internal dependency, "regexdev"
00:01:36.310 ml/*: missing internal dependency, "mldev"
00:01:36.310 vdpa/ifc: not in enabled drivers build config
00:01:36.310 vdpa/mlx5: not in enabled drivers build config
00:01:36.310 vdpa/nfp: not in enabled drivers build config
00:01:36.310 vdpa/sfc: not in enabled drivers build config
00:01:36.310 event/*: missing internal dependency, "eventdev"
00:01:36.310 baseband/*: missing internal dependency, "bbdev"
00:01:36.310 gpu/*: missing internal dependency, "gpudev"
00:01:36.310
00:01:36.310
00:01:36.310 Build targets in project: 84
00:01:36.310
00:01:36.310 DPDK 23.11.0
00:01:36.310
00:01:36.310 User defined options
00:01:36.310 buildtype : debug
00:01:36.310 default_library : shared
00:01:36.310 libdir : lib
00:01:36.310 prefix : /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build
00:01:36.310 b_sanitize : address
00:01:36.310 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds
00:01:36.310 c_link_args :
00:01:36.310 cpu_instruction_set: native
00:01:36.310 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:01:36.310 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:01:36.310 enable_docs : false
00:01:36.310 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:36.310 enable_kmods : false
00:01:36.310 tests : false
00:01:36.310
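The "User defined options" summary above corresponds to a meson setup invocation of roughly this shape. This is a sketch for orientation only: SPDK's build scripts generate the real command, and the long disable_apps/disable_libs lists (printed in full in the summary) are elided here with '...':

    meson setup build-tmp \
        --buildtype=debug --default-library=shared --libdir=lib \
        --prefix=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build \
        -Db_sanitize=address \
        -Dc_args='-fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds' \
        -Dcpu_instruction_set=native \
        -Ddisable_apps=... -Ddisable_libs=... \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_docs=false -Denable_kmods=false -Dtests=false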
00:01:36.310 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:36.567 ninja: Entering directory `/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build-tmp'
00:01:36.850 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:36.850 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:36.850 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:36.850 [4/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:36.850 [5/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:36.850 [6/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:36.850 [7/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:36.850 [8/264] Linking static target lib/librte_kvargs.a
00:01:36.850 [9/264] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:36.850 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:36.850 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:36.850 [12/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:36.850 [13/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:36.850 [14/264] Linking static target lib/librte_log.a
00:01:36.850 [15/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:36.850 [16/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:37.110 [17/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:37.110 [18/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:37.110 [19/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:37.110 [20/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:37.110 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:37.110 [22/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:37.110 [23/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:37.110 [24/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:37.110 [25/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:37.110 [26/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:37.110 [27/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:37.110 [28/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:37.110 [29/264] Linking static target lib/librte_pci.a
00:01:37.110 [30/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:37.110 [31/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:37.110 [32/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:37.110 [33/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:37.110 [34/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:37.110 [35/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:37.110 [36/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:37.110 [37/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:37.110 [38/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:37.366 [39/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:37.366 [40/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:37.366 [41/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:37.366 [42/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:37.366 [43/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:37.366 [44/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:37.366 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:37.366 [46/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:37.366 [47/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:37.366 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:37.366 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:37.366 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:37.366 [51/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:37.366 [52/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:37.366 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:37.366 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:37.366 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:37.366 [56/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:37.366 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:37.366 [58/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:37.366 [59/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:37.366 [60/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:37.366 [61/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:37.366 [62/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:37.366 [63/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:37.366 [64/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:37.366 [65/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:37.366 [66/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:37.366 [67/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:37.366 [68/264] Linking static target lib/librte_meter.a
00:01:37.366 [69/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:37.366 [70/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:37.366 [71/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:37.366 [72/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:37.366 [73/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:37.366 [74/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:37.366 [75/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:37.366 [76/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:37.366 [77/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:37.366 [78/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:37.366 [79/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:37.366 [80/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:37.366 [81/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:37.623 [82/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:37.623 [83/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:37.623 [84/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:37.623 [85/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:37.623 [86/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:37.623 [87/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:37.623 [88/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:37.623 [89/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:37.623 [90/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:37.623 [91/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:37.623 [92/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:37.623 [93/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:37.623 [94/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:37.623 [95/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:37.623 [96/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:37.623 [97/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:37.623 [98/264] Linking static target lib/librte_ring.a
00:01:37.623 [99/264] Linking static target lib/librte_telemetry.a
00:01:37.623 [100/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:01:37.623 [101/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:37.623 [102/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:37.623 [103/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:37.623 [104/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:37.623 [105/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:37.623 [106/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:37.623 [107/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:37.623 [108/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:37.623 [109/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:37.623 [110/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:37.623 [111/264] Linking static target lib/librte_cmdline.a
00:01:37.623 [112/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:37.623 [113/264] Linking static target lib/librte_timer.a
00:01:37.623 [114/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:37.623 [115/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:37.623 [116/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:37.623 [117/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:37.623 [118/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:37.623 [119/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:37.623 [120/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:37.623 [121/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:37.623 [122/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:37.623 [123/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:37.623 [124/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:37.623 [125/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:37.623 [126/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:37.623 [127/264] Linking static target lib/librte_dmadev.a
00:01:37.623 [128/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:37.623 [129/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:37.623 [130/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:37.623 [131/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:37.623 [132/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:37.623 [133/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:37.623 [134/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:37.623 [135/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:37.623 [136/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:37.623 [137/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:37.623 [138/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:37.623 [139/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:37.623 [140/264] Linking static target lib/librte_net.a
00:01:37.623 [141/264] Linking static target lib/librte_mempool.a
00:01:37.623 [142/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:37.623 [143/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:37.623 [144/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:37.623 [145/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:37.623 [146/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:37.623 [147/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:01:37.623 [148/264] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:37.623 [149/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:37.623 [150/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:37.623 [151/264] Linking target lib/librte_log.so.24.0
00:01:37.623 [152/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:01:37.623 [153/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:37.623 [154/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:37.623 [155/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:37.623 [156/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:37.623 [157/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:37.623 [158/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:37.623 [159/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:01:37.623 [160/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:37.623 [161/264] Linking static target lib/librte_reorder.a
00:01:37.623 [162/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:37.623 [163/264] Linking static target lib/librte_power.a
00:01:37.623 [164/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:37.623 [165/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:37.623 [166/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:37.623 [167/264] Linking static target lib/librte_rcu.a
00:01:37.879 [168/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:01:37.879 [169/264] Linking static target lib/librte_compressdev.a
00:01:37.879 [170/264] Linking static target drivers/libtmp_rte_bus_pci.a
00:01:37.879 [171/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:37.879 [172/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:01:37.879 [173/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:01:37.879 [174/264] Linking static target lib/librte_eal.a
00:01:37.879 [175/264] Linking static target lib/librte_security.a
00:01:37.879 [176/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:01:37.879 [177/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:37.879 [178/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:37.879 [179/264] Linking target lib/librte_kvargs.so.24.0
00:01:37.879 [180/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:01:37.879 [181/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:37.879 [182/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:37.879 [183/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:37.879 [184/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:37.879 [185/264] Linking static target drivers/librte_bus_vdev.a
00:01:37.879 [186/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:37.879 [187/264] Linking target lib/librte_telemetry.so.24.0
00:01:37.879 [188/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:37.879 [189/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:01:37.879 [190/264] Linking static target drivers/libtmp_rte_mempool_ring.a
00:01:37.879 [191/264] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:01:37.879 [192/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:37.879 [194/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:37.879 [195/264] Linking static target drivers/librte_bus_pci.a 00:01:37.879 [196/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.879 [197/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.879 [198/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:37.879 [199/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:37.879 [200/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:37.879 [201/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:38.136 [202/264] Linking static target lib/librte_mbuf.a 00:01:38.136 [203/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:38.136 [204/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.136 [205/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:38.136 [206/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:38.136 [207/264] Linking static target drivers/librte_mempool_ring.a 00:01:38.136 [208/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:38.136 [209/264] Linking static target lib/librte_hash.a 00:01:38.136 [210/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.136 [211/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.136 [212/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.136 [213/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.136 [214/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.390 [215/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:38.390 [216/264] Linking static target lib/librte_cryptodev.a 00:01:38.390 [217/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.390 [218/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:38.390 [219/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.647 [220/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.905 [221/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:38.905 [222/264] Linking static target lib/librte_ethdev.a 00:01:39.467 [223/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:39.467 [224/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.994 [225/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:41.994 [226/264] Linking static target lib/librte_vhost.a 00:01:42.928 [227/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.302 [228/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.302 [229/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:44.302 [230/264] Linking target lib/librte_eal.so.24.0 00:01:44.560 [231/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:44.560 [232/264] Linking target lib/librte_timer.so.24.0 00:01:44.560 [233/264] Linking target lib/librte_ring.so.24.0 00:01:44.560 [234/264] Linking target lib/librte_pci.so.24.0 00:01:44.560 [235/264] Linking target drivers/librte_bus_vdev.so.24.0 00:01:44.560 [236/264] Linking target lib/librte_dmadev.so.24.0 00:01:44.560 [237/264] Linking target lib/librte_meter.so.24.0 00:01:44.560 [238/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:44.560 [239/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:44.560 [240/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:44.560 [241/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:44.560 [242/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:44.560 [243/264] Linking target lib/librte_rcu.so.24.0 00:01:44.560 [244/264] Linking target lib/librte_mempool.so.24.0 00:01:44.560 [245/264] Linking target drivers/librte_bus_pci.so.24.0 00:01:44.818 [246/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:44.818 [247/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:44.818 [248/264] Linking target drivers/librte_mempool_ring.so.24.0 00:01:44.818 [249/264] Linking target lib/librte_mbuf.so.24.0 00:01:44.818 [250/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:44.818 [251/264] Linking target lib/librte_compressdev.so.24.0 00:01:44.818 [252/264] Linking target lib/librte_net.so.24.0 00:01:44.818 [253/264] Linking target lib/librte_reorder.so.24.0 00:01:44.818 [254/264] Linking target lib/librte_cryptodev.so.24.0 00:01:45.077 [255/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:45.077 [256/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:45.077 [257/264] Linking target lib/librte_security.so.24.0 00:01:45.077 [258/264] Linking target lib/librte_cmdline.so.24.0 00:01:45.077 [259/264] Linking target lib/librte_ethdev.so.24.0 00:01:45.077 [260/264] Linking target lib/librte_hash.so.24.0 00:01:45.077 [261/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:45.077 [262/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:45.077 [263/264] Linking target lib/librte_power.so.24.0 00:01:45.335 [264/264] Linking target lib/librte_vhost.so.24.0 00:01:45.335 INFO: autodetecting backend as ninja 00:01:45.335 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build-tmp -j 128 00:01:45.900 CC lib/log/log.o 00:01:45.900 CC lib/log/log_flags.o 00:01:45.900 CC lib/log/log_deprecated.o 00:01:45.900 CC lib/ut_mock/mock.o 00:01:45.900 CC lib/ut/ut.o 00:01:45.900 LIB libspdk_ut_mock.a 00:01:45.900 LIB libspdk_log.a 00:01:45.900 SO libspdk_ut_mock.so.5.0 00:01:45.900 LIB libspdk_ut.a 00:01:45.900 SO libspdk_log.so.6.1 00:01:45.900 SO libspdk_ut.so.1.0 00:01:45.900 SYMLINK libspdk_ut_mock.so 00:01:45.900 SYMLINK libspdk_log.so 00:01:45.900 SYMLINK libspdk_ut.so 00:01:46.158 CXX lib/trace_parser/trace.o 00:01:46.158 CC lib/util/base64.o 00:01:46.158 CC lib/util/bit_array.o 00:01:46.158 CC 
lib/util/cpuset.o 00:01:46.158 CC lib/ioat/ioat.o 00:01:46.158 CC lib/util/crc16.o 00:01:46.158 CC lib/util/crc64.o 00:01:46.158 CC lib/util/crc32.o 00:01:46.158 CC lib/util/crc32c.o 00:01:46.158 CC lib/util/fd.o 00:01:46.158 CC lib/util/dif.o 00:01:46.158 CC lib/util/crc32_ieee.o 00:01:46.158 CC lib/util/file.o 00:01:46.158 CC lib/util/hexlify.o 00:01:46.158 CC lib/util/iov.o 00:01:46.158 CC lib/util/strerror_tls.o 00:01:46.158 CC lib/util/math.o 00:01:46.158 CC lib/util/pipe.o 00:01:46.158 CC lib/util/string.o 00:01:46.158 CC lib/dma/dma.o 00:01:46.158 CC lib/util/fd_group.o 00:01:46.158 CC lib/util/xor.o 00:01:46.158 CC lib/util/uuid.o 00:01:46.158 CC lib/util/zipf.o 00:01:46.158 CC lib/vfio_user/host/vfio_user_pci.o 00:01:46.158 CC lib/vfio_user/host/vfio_user.o 00:01:46.416 LIB libspdk_dma.a 00:01:46.416 LIB libspdk_ioat.a 00:01:46.416 SO libspdk_dma.so.3.0 00:01:46.416 SO libspdk_ioat.so.6.0 00:01:46.416 LIB libspdk_vfio_user.a 00:01:46.416 SYMLINK libspdk_dma.so 00:01:46.416 SYMLINK libspdk_ioat.so 00:01:46.416 SO libspdk_vfio_user.so.4.0 00:01:46.416 SYMLINK libspdk_vfio_user.so 00:01:46.674 LIB libspdk_util.a 00:01:46.674 SO libspdk_util.so.8.0 00:01:46.674 LIB libspdk_trace_parser.a 00:01:46.674 SO libspdk_trace_parser.so.4.0 00:01:46.674 SYMLINK libspdk_util.so 00:01:46.674 SYMLINK libspdk_trace_parser.so 00:01:46.931 CC lib/conf/conf.o 00:01:46.931 CC lib/json/json_parse.o 00:01:46.931 CC lib/idxd/idxd_user.o 00:01:46.931 CC lib/json/json_util.o 00:01:46.931 CC lib/json/json_write.o 00:01:46.931 CC lib/idxd/idxd.o 00:01:46.931 CC lib/vmd/led.o 00:01:46.931 CC lib/rdma/common.o 00:01:46.931 CC lib/vmd/vmd.o 00:01:46.931 CC lib/rdma/rdma_verbs.o 00:01:46.931 CC lib/env_dpdk/env.o 00:01:46.931 CC lib/env_dpdk/memory.o 00:01:46.931 CC lib/env_dpdk/threads.o 00:01:46.931 CC lib/env_dpdk/init.o 00:01:46.931 CC lib/env_dpdk/pci.o 00:01:46.931 CC lib/env_dpdk/pci_virtio.o 00:01:46.931 CC lib/env_dpdk/pci_ioat.o 00:01:46.931 CC lib/env_dpdk/pci_idxd.o 00:01:46.931 CC lib/env_dpdk/pci_event.o 00:01:46.931 CC lib/env_dpdk/pci_vmd.o 00:01:46.931 CC lib/env_dpdk/pci_dpdk.o 00:01:46.931 CC lib/env_dpdk/sigbus_handler.o 00:01:46.931 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:46.931 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:47.189 LIB libspdk_json.a 00:01:47.189 LIB libspdk_conf.a 00:01:47.189 SO libspdk_json.so.5.1 00:01:47.189 SO libspdk_conf.so.5.0 00:01:47.189 LIB libspdk_rdma.a 00:01:47.189 SYMLINK libspdk_json.so 00:01:47.189 SYMLINK libspdk_conf.so 00:01:47.189 SO libspdk_rdma.so.5.0 00:01:47.189 SYMLINK libspdk_rdma.so 00:01:47.447 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:47.447 CC lib/jsonrpc/jsonrpc_server.o 00:01:47.447 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:47.447 CC lib/jsonrpc/jsonrpc_client.o 00:01:47.447 LIB libspdk_idxd.a 00:01:47.706 LIB libspdk_jsonrpc.a 00:01:47.706 SO libspdk_idxd.so.11.0 00:01:47.706 SO libspdk_jsonrpc.so.5.1 00:01:47.706 SYMLINK libspdk_idxd.so 00:01:47.706 LIB libspdk_vmd.a 00:01:47.706 SYMLINK libspdk_jsonrpc.so 00:01:47.706 SO libspdk_vmd.so.5.0 00:01:47.706 SYMLINK libspdk_vmd.so 00:01:47.706 CC lib/rpc/rpc.o 00:01:47.965 LIB libspdk_env_dpdk.a 00:01:47.965 SO libspdk_env_dpdk.so.13.0 00:01:47.965 LIB libspdk_rpc.a 00:01:47.965 SO libspdk_rpc.so.5.0 00:01:47.965 SYMLINK libspdk_env_dpdk.so 00:01:47.965 SYMLINK libspdk_rpc.so 00:01:48.284 CC lib/sock/sock.o 00:01:48.284 CC lib/sock/sock_rpc.o 00:01:48.284 CC lib/notify/notify.o 00:01:48.284 CC lib/notify/notify_rpc.o 00:01:48.284 CC lib/trace/trace.o 00:01:48.284 CC lib/trace/trace_flags.o 00:01:48.284 CC 
lib/trace/trace_rpc.o 00:01:48.284 LIB libspdk_notify.a 00:01:48.284 LIB libspdk_trace.a 00:01:48.284 SO libspdk_notify.so.5.0 00:01:48.284 SO libspdk_trace.so.9.0 00:01:48.284 SYMLINK libspdk_notify.so 00:01:48.562 SYMLINK libspdk_trace.so 00:01:48.562 LIB libspdk_sock.a 00:01:48.562 SO libspdk_sock.so.8.0 00:01:48.562 SYMLINK libspdk_sock.so 00:01:48.562 CC lib/thread/thread.o 00:01:48.562 CC lib/thread/iobuf.o 00:01:48.562 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:48.562 CC lib/nvme/nvme_ctrlr.o 00:01:48.562 CC lib/nvme/nvme_fabric.o 00:01:48.562 CC lib/nvme/nvme_ns_cmd.o 00:01:48.562 CC lib/nvme/nvme_pcie.o 00:01:48.562 CC lib/nvme/nvme_ns.o 00:01:48.562 CC lib/nvme/nvme_pcie_common.o 00:01:48.562 CC lib/nvme/nvme_qpair.o 00:01:48.562 CC lib/nvme/nvme_discovery.o 00:01:48.562 CC lib/nvme/nvme.o 00:01:48.562 CC lib/nvme/nvme_quirks.o 00:01:48.562 CC lib/nvme/nvme_transport.o 00:01:48.562 CC lib/nvme/nvme_tcp.o 00:01:48.562 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:48.562 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:48.562 CC lib/nvme/nvme_io_msg.o 00:01:48.562 CC lib/nvme/nvme_opal.o 00:01:48.562 CC lib/nvme/nvme_poll_group.o 00:01:48.562 CC lib/nvme/nvme_zns.o 00:01:48.562 CC lib/nvme/nvme_cuse.o 00:01:48.562 CC lib/nvme/nvme_vfio_user.o 00:01:48.562 CC lib/nvme/nvme_rdma.o 00:01:49.498 LIB libspdk_thread.a 00:01:49.498 SO libspdk_thread.so.9.0 00:01:49.754 SYMLINK libspdk_thread.so 00:01:49.754 CC lib/virtio/virtio.o 00:01:49.754 CC lib/virtio/virtio_vhost_user.o 00:01:49.754 CC lib/virtio/virtio_vfio_user.o 00:01:49.754 CC lib/virtio/virtio_pci.o 00:01:49.754 CC lib/accel/accel_rpc.o 00:01:49.754 CC lib/accel/accel.o 00:01:49.754 CC lib/accel/accel_sw.o 00:01:49.754 CC lib/blob/request.o 00:01:49.754 CC lib/init/json_config.o 00:01:49.754 CC lib/init/subsystem_rpc.o 00:01:49.754 CC lib/init/subsystem.o 00:01:49.754 CC lib/blob/blobstore.o 00:01:49.754 CC lib/init/rpc.o 00:01:49.754 CC lib/blob/zeroes.o 00:01:49.754 CC lib/blob/blob_bs_dev.o 00:01:50.012 LIB libspdk_init.a 00:01:50.012 SO libspdk_init.so.4.0 00:01:50.012 SYMLINK libspdk_init.so 00:01:50.012 LIB libspdk_virtio.a 00:01:50.271 SO libspdk_virtio.so.6.0 00:01:50.271 CC lib/event/app.o 00:01:50.271 CC lib/event/log_rpc.o 00:01:50.271 CC lib/event/reactor.o 00:01:50.271 CC lib/event/scheduler_static.o 00:01:50.271 CC lib/event/app_rpc.o 00:01:50.271 SYMLINK libspdk_virtio.so 00:01:50.271 LIB libspdk_nvme.a 00:01:50.271 SO libspdk_nvme.so.12.0 00:01:50.530 SYMLINK libspdk_nvme.so 00:01:50.790 LIB libspdk_event.a 00:01:50.790 SO libspdk_event.so.12.0 00:01:50.790 SYMLINK libspdk_event.so 00:01:51.048 LIB libspdk_accel.a 00:01:51.048 SO libspdk_accel.so.14.0 00:01:51.048 SYMLINK libspdk_accel.so 00:01:51.306 CC lib/bdev/bdev_rpc.o 00:01:51.306 CC lib/bdev/bdev.o 00:01:51.306 CC lib/bdev/bdev_zone.o 00:01:51.306 CC lib/bdev/scsi_nvme.o 00:01:51.306 CC lib/bdev/part.o 00:01:52.684 LIB libspdk_blob.a 00:01:52.684 SO libspdk_blob.so.10.1 00:01:52.684 SYMLINK libspdk_blob.so 00:01:52.684 CC lib/blobfs/blobfs.o 00:01:52.945 CC lib/blobfs/tree.o 00:01:52.945 CC lib/lvol/lvol.o 00:01:53.516 LIB libspdk_bdev.a 00:01:53.516 LIB libspdk_blobfs.a 00:01:53.516 SO libspdk_bdev.so.14.0 00:01:53.516 SO libspdk_blobfs.so.9.0 00:01:53.516 SYMLINK libspdk_bdev.so 00:01:53.516 SYMLINK libspdk_blobfs.so 00:01:53.516 CC lib/nvmf/ctrlr.o 00:01:53.516 CC lib/nvmf/ctrlr_discovery.o 00:01:53.516 CC lib/nvmf/subsystem.o 00:01:53.516 CC lib/nvmf/ctrlr_bdev.o 00:01:53.516 CC lib/nvmf/transport.o 00:01:53.516 CC lib/nvmf/nvmf_rpc.o 00:01:53.516 CC lib/nvmf/nvmf.o 
00:01:53.516 CC lib/nvmf/tcp.o 00:01:53.516 CC lib/ublk/ublk_rpc.o 00:01:53.516 CC lib/ublk/ublk.o 00:01:53.516 CC lib/nvmf/rdma.o 00:01:53.516 CC lib/nbd/nbd.o 00:01:53.516 CC lib/nbd/nbd_rpc.o 00:01:53.516 CC lib/ftl/ftl_core.o 00:01:53.516 CC lib/ftl/ftl_init.o 00:01:53.516 CC lib/ftl/ftl_debug.o 00:01:53.516 CC lib/ftl/ftl_io.o 00:01:53.516 CC lib/ftl/ftl_layout.o 00:01:53.516 CC lib/ftl/ftl_l2p.o 00:01:53.516 CC lib/ftl/ftl_sb.o 00:01:53.516 CC lib/ftl/ftl_nv_cache.o 00:01:53.516 CC lib/ftl/ftl_band.o 00:01:53.516 CC lib/ftl/ftl_l2p_flat.o 00:01:53.516 CC lib/ftl/ftl_band_ops.o 00:01:53.516 CC lib/ftl/ftl_writer.o 00:01:53.516 CC lib/ftl/ftl_rq.o 00:01:53.516 CC lib/scsi/dev.o 00:01:53.516 CC lib/ftl/ftl_l2p_cache.o 00:01:53.516 CC lib/ftl/mngt/ftl_mngt.o 00:01:53.516 CC lib/ftl/ftl_reloc.o 00:01:53.516 CC lib/scsi/lun.o 00:01:53.516 CC lib/ftl/ftl_p2l.o 00:01:53.516 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:53.516 CC lib/scsi/port.o 00:01:53.516 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:53.516 CC lib/scsi/scsi.o 00:01:53.516 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:53.516 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:53.516 CC lib/scsi/scsi_pr.o 00:01:53.516 CC lib/scsi/scsi_rpc.o 00:01:53.516 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:53.516 CC lib/scsi/scsi_bdev.o 00:01:53.516 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:53.516 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:53.516 CC lib/scsi/task.o 00:01:53.516 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:53.516 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:53.775 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:53.775 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:53.775 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:53.775 CC lib/ftl/utils/ftl_conf.o 00:01:53.775 CC lib/ftl/utils/ftl_mempool.o 00:01:53.775 CC lib/ftl/utils/ftl_md.o 00:01:53.775 CC lib/ftl/utils/ftl_bitmap.o 00:01:53.775 CC lib/ftl/utils/ftl_property.o 00:01:53.775 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:53.775 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:53.775 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:53.775 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:53.775 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:53.775 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:53.775 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:53.775 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:53.775 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:53.775 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:53.775 CC lib/ftl/base/ftl_base_dev.o 00:01:53.775 CC lib/ftl/base/ftl_base_bdev.o 00:01:53.775 CC lib/ftl/ftl_trace.o 00:01:54.033 LIB libspdk_lvol.a 00:01:54.033 SO libspdk_lvol.so.9.1 00:01:54.033 SYMLINK libspdk_lvol.so 00:01:54.292 LIB libspdk_nbd.a 00:01:54.292 SO libspdk_nbd.so.6.0 00:01:54.292 LIB libspdk_scsi.a 00:01:54.292 SO libspdk_scsi.so.8.0 00:01:54.292 SYMLINK libspdk_nbd.so 00:01:54.292 SYMLINK libspdk_scsi.so 00:01:54.551 LIB libspdk_ublk.a 00:01:54.551 CC lib/vhost/vhost.o 00:01:54.551 CC lib/iscsi/conn.o 00:01:54.551 CC lib/iscsi/init_grp.o 00:01:54.551 CC lib/iscsi/iscsi.o 00:01:54.551 CC lib/vhost/vhost_scsi.o 00:01:54.551 CC lib/vhost/vhost_rpc.o 00:01:54.551 CC lib/iscsi/md5.o 00:01:54.551 CC lib/vhost/rte_vhost_user.o 00:01:54.551 CC lib/iscsi/param.o 00:01:54.551 CC lib/vhost/vhost_blk.o 00:01:54.551 CC lib/iscsi/portal_grp.o 00:01:54.551 CC lib/iscsi/tgt_node.o 00:01:54.551 CC lib/iscsi/iscsi_rpc.o 00:01:54.551 CC lib/iscsi/iscsi_subsystem.o 00:01:54.551 CC lib/iscsi/task.o 00:01:54.551 SO libspdk_ublk.so.2.0 00:01:54.551 SYMLINK libspdk_ublk.so 00:01:54.551 LIB libspdk_ftl.a 00:01:54.810 SO libspdk_ftl.so.8.0 00:01:55.069 SYMLINK libspdk_ftl.so 00:01:55.328 LIB 
libspdk_nvmf.a 00:01:55.328 SO libspdk_nvmf.so.17.0 00:01:55.586 LIB libspdk_vhost.a 00:01:55.586 SO libspdk_vhost.so.7.1 00:01:55.586 SYMLINK libspdk_nvmf.so 00:01:55.586 SYMLINK libspdk_vhost.so 00:01:56.157 LIB libspdk_iscsi.a 00:01:56.157 SO libspdk_iscsi.so.7.0 00:01:56.416 SYMLINK libspdk_iscsi.so 00:01:56.416 CC module/env_dpdk/env_dpdk_rpc.o 00:01:56.674 CC module/accel/ioat/accel_ioat.o 00:01:56.674 CC module/accel/ioat/accel_ioat_rpc.o 00:01:56.674 CC module/sock/posix/posix.o 00:01:56.674 CC module/accel/error/accel_error_rpc.o 00:01:56.674 CC module/accel/error/accel_error.o 00:01:56.674 CC module/accel/iaa/accel_iaa_rpc.o 00:01:56.674 CC module/accel/iaa/accel_iaa.o 00:01:56.674 CC module/accel/dsa/accel_dsa.o 00:01:56.674 CC module/accel/dsa/accel_dsa_rpc.o 00:01:56.674 CC module/scheduler/gscheduler/gscheduler.o 00:01:56.674 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:56.674 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:56.674 CC module/blob/bdev/blob_bdev.o 00:01:56.674 LIB libspdk_env_dpdk_rpc.a 00:01:56.674 SO libspdk_env_dpdk_rpc.so.5.0 00:01:56.674 LIB libspdk_scheduler_gscheduler.a 00:01:56.674 LIB libspdk_accel_ioat.a 00:01:56.674 LIB libspdk_accel_error.a 00:01:56.674 SYMLINK libspdk_env_dpdk_rpc.so 00:01:56.674 SO libspdk_scheduler_gscheduler.so.3.0 00:01:56.674 SO libspdk_accel_ioat.so.5.0 00:01:56.674 SO libspdk_accel_error.so.1.0 00:01:56.674 LIB libspdk_scheduler_dpdk_governor.a 00:01:56.674 SYMLINK libspdk_scheduler_gscheduler.so 00:01:56.674 SYMLINK libspdk_accel_ioat.so 00:01:56.674 SO libspdk_scheduler_dpdk_governor.so.3.0 00:01:56.932 LIB libspdk_accel_iaa.a 00:01:56.932 LIB libspdk_scheduler_dynamic.a 00:01:56.932 SYMLINK libspdk_accel_error.so 00:01:56.932 LIB libspdk_blob_bdev.a 00:01:56.932 SO libspdk_blob_bdev.so.10.1 00:01:56.932 SO libspdk_scheduler_dynamic.so.3.0 00:01:56.932 SO libspdk_accel_iaa.so.2.0 00:01:56.932 LIB libspdk_accel_dsa.a 00:01:56.932 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:56.932 SO libspdk_accel_dsa.so.4.0 00:01:56.932 SYMLINK libspdk_scheduler_dynamic.so 00:01:56.932 SYMLINK libspdk_blob_bdev.so 00:01:56.932 SYMLINK libspdk_accel_iaa.so 00:01:56.932 SYMLINK libspdk_accel_dsa.so 00:01:57.190 CC module/bdev/split/vbdev_split.o 00:01:57.190 CC module/bdev/ftl/bdev_ftl.o 00:01:57.190 CC module/bdev/split/vbdev_split_rpc.o 00:01:57.190 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:57.190 CC module/blobfs/bdev/blobfs_bdev.o 00:01:57.190 CC module/bdev/lvol/vbdev_lvol.o 00:01:57.190 CC module/bdev/error/vbdev_error.o 00:01:57.190 CC module/bdev/malloc/bdev_malloc.o 00:01:57.190 CC module/bdev/error/vbdev_error_rpc.o 00:01:57.190 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:57.190 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:57.190 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:57.190 CC module/bdev/aio/bdev_aio.o 00:01:57.190 CC module/bdev/passthru/vbdev_passthru.o 00:01:57.190 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:57.190 CC module/bdev/aio/bdev_aio_rpc.o 00:01:57.190 CC module/bdev/null/bdev_null.o 00:01:57.190 CC module/bdev/null/bdev_null_rpc.o 00:01:57.190 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:57.190 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:57.190 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:57.190 CC module/bdev/gpt/gpt.o 00:01:57.190 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:57.190 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:57.190 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:57.190 CC module/bdev/gpt/vbdev_gpt.o 00:01:57.190 CC module/bdev/raid/bdev_raid.o 
00:01:57.190 CC module/bdev/iscsi/bdev_iscsi.o 00:01:57.190 CC module/bdev/raid/bdev_raid_sb.o 00:01:57.190 CC module/bdev/raid/bdev_raid_rpc.o 00:01:57.190 CC module/bdev/delay/vbdev_delay.o 00:01:57.190 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:57.190 CC module/bdev/raid/raid1.o 00:01:57.190 CC module/bdev/nvme/bdev_nvme.o 00:01:57.190 CC module/bdev/raid/concat.o 00:01:57.190 CC module/bdev/raid/raid0.o 00:01:57.190 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:57.190 CC module/bdev/nvme/nvme_rpc.o 00:01:57.190 CC module/bdev/nvme/vbdev_opal.o 00:01:57.190 CC module/bdev/nvme/bdev_mdns_client.o 00:01:57.190 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:57.190 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:57.190 LIB libspdk_sock_posix.a 00:01:57.190 SO libspdk_sock_posix.so.5.0 00:01:57.190 SYMLINK libspdk_sock_posix.so 00:01:57.449 LIB libspdk_blobfs_bdev.a 00:01:57.449 LIB libspdk_bdev_null.a 00:01:57.449 SO libspdk_blobfs_bdev.so.5.0 00:01:57.449 LIB libspdk_bdev_gpt.a 00:01:57.449 LIB libspdk_bdev_split.a 00:01:57.449 LIB libspdk_bdev_ftl.a 00:01:57.449 SO libspdk_bdev_null.so.5.0 00:01:57.449 SO libspdk_bdev_split.so.5.0 00:01:57.449 LIB libspdk_bdev_passthru.a 00:01:57.449 SO libspdk_bdev_gpt.so.5.0 00:01:57.449 SO libspdk_bdev_ftl.so.5.0 00:01:57.449 SYMLINK libspdk_blobfs_bdev.so 00:01:57.449 SO libspdk_bdev_passthru.so.5.0 00:01:57.449 LIB libspdk_bdev_aio.a 00:01:57.449 LIB libspdk_bdev_error.a 00:01:57.449 SYMLINK libspdk_bdev_null.so 00:01:57.449 LIB libspdk_bdev_delay.a 00:01:57.449 SYMLINK libspdk_bdev_ftl.so 00:01:57.449 SYMLINK libspdk_bdev_gpt.so 00:01:57.449 SO libspdk_bdev_aio.so.5.0 00:01:57.449 SYMLINK libspdk_bdev_split.so 00:01:57.449 SO libspdk_bdev_error.so.5.0 00:01:57.449 SO libspdk_bdev_delay.so.5.0 00:01:57.449 SYMLINK libspdk_bdev_passthru.so 00:01:57.449 LIB libspdk_bdev_zone_block.a 00:01:57.449 SYMLINK libspdk_bdev_aio.so 00:01:57.449 SYMLINK libspdk_bdev_error.so 00:01:57.449 SYMLINK libspdk_bdev_delay.so 00:01:57.709 SO libspdk_bdev_zone_block.so.5.0 00:01:57.709 LIB libspdk_bdev_malloc.a 00:01:57.709 LIB libspdk_bdev_iscsi.a 00:01:57.709 SO libspdk_bdev_malloc.so.5.0 00:01:57.709 SO libspdk_bdev_iscsi.so.5.0 00:01:57.709 LIB libspdk_bdev_virtio.a 00:01:57.709 SYMLINK libspdk_bdev_zone_block.so 00:01:57.709 SYMLINK libspdk_bdev_malloc.so 00:01:57.709 SYMLINK libspdk_bdev_iscsi.so 00:01:57.709 SO libspdk_bdev_virtio.so.5.0 00:01:57.709 LIB libspdk_bdev_lvol.a 00:01:57.709 SYMLINK libspdk_bdev_virtio.so 00:01:57.709 SO libspdk_bdev_lvol.so.5.0 00:01:57.709 SYMLINK libspdk_bdev_lvol.so 00:01:58.277 LIB libspdk_bdev_raid.a 00:01:58.277 SO libspdk_bdev_raid.so.5.0 00:01:58.277 SYMLINK libspdk_bdev_raid.so 00:01:58.844 LIB libspdk_bdev_nvme.a 00:01:58.844 SO libspdk_bdev_nvme.so.6.0 00:01:59.103 SYMLINK libspdk_bdev_nvme.so 00:01:59.361 CC module/event/subsystems/sock/sock.o 00:01:59.361 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:59.361 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:59.361 CC module/event/subsystems/vmd/vmd.o 00:01:59.361 CC module/event/subsystems/scheduler/scheduler.o 00:01:59.361 CC module/event/subsystems/iobuf/iobuf.o 00:01:59.361 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:59.361 LIB libspdk_event_vhost_blk.a 00:01:59.361 LIB libspdk_event_scheduler.a 00:01:59.361 LIB libspdk_event_sock.a 00:01:59.361 SO libspdk_event_vhost_blk.so.2.0 00:01:59.361 SO libspdk_event_scheduler.so.3.0 00:01:59.361 LIB libspdk_event_vmd.a 00:01:59.361 SO libspdk_event_sock.so.4.0 00:01:59.361 LIB libspdk_event_iobuf.a 00:01:59.361 SYMLINK 
libspdk_event_vhost_blk.so 00:01:59.361 SO libspdk_event_vmd.so.5.0 00:01:59.361 SYMLINK libspdk_event_sock.so 00:01:59.361 SYMLINK libspdk_event_scheduler.so 00:01:59.361 SO libspdk_event_iobuf.so.2.0 00:01:59.620 SYMLINK libspdk_event_iobuf.so 00:01:59.620 SYMLINK libspdk_event_vmd.so 00:01:59.620 CC module/event/subsystems/accel/accel.o 00:01:59.880 LIB libspdk_event_accel.a 00:01:59.880 SO libspdk_event_accel.so.5.0 00:01:59.880 SYMLINK libspdk_event_accel.so 00:02:00.139 CC module/event/subsystems/bdev/bdev.o 00:02:00.139 LIB libspdk_event_bdev.a 00:02:00.139 SO libspdk_event_bdev.so.5.0 00:02:00.139 SYMLINK libspdk_event_bdev.so 00:02:00.398 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:00.398 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:00.398 CC module/event/subsystems/scsi/scsi.o 00:02:00.398 CC module/event/subsystems/nbd/nbd.o 00:02:00.398 CC module/event/subsystems/ublk/ublk.o 00:02:00.657 LIB libspdk_event_scsi.a 00:02:00.657 LIB libspdk_event_nbd.a 00:02:00.657 LIB libspdk_event_ublk.a 00:02:00.657 SO libspdk_event_scsi.so.5.0 00:02:00.657 SO libspdk_event_nbd.so.5.0 00:02:00.657 SO libspdk_event_ublk.so.2.0 00:02:00.657 SYMLINK libspdk_event_scsi.so 00:02:00.657 LIB libspdk_event_nvmf.a 00:02:00.657 SYMLINK libspdk_event_ublk.so 00:02:00.657 SYMLINK libspdk_event_nbd.so 00:02:00.657 SO libspdk_event_nvmf.so.5.0 00:02:00.657 SYMLINK libspdk_event_nvmf.so 00:02:00.657 CC module/event/subsystems/iscsi/iscsi.o 00:02:00.657 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:00.916 LIB libspdk_event_iscsi.a 00:02:00.916 LIB libspdk_event_vhost_scsi.a 00:02:00.916 SO libspdk_event_iscsi.so.5.0 00:02:00.916 SO libspdk_event_vhost_scsi.so.2.0 00:02:00.916 SYMLINK libspdk_event_iscsi.so 00:02:00.916 SYMLINK libspdk_event_vhost_scsi.so 00:02:01.176 SO libspdk.so.5.0 00:02:01.176 SYMLINK libspdk.so 00:02:01.176 CC test/rpc_client/rpc_client_test.o 00:02:01.176 TEST_HEADER include/spdk/accel.h 00:02:01.176 TEST_HEADER include/spdk/assert.h 00:02:01.176 CXX app/trace/trace.o 00:02:01.176 TEST_HEADER include/spdk/accel_module.h 00:02:01.176 TEST_HEADER include/spdk/base64.h 00:02:01.176 TEST_HEADER include/spdk/barrier.h 00:02:01.176 TEST_HEADER include/spdk/bdev_module.h 00:02:01.176 TEST_HEADER include/spdk/bdev.h 00:02:01.176 TEST_HEADER include/spdk/bdev_zone.h 00:02:01.176 TEST_HEADER include/spdk/bit_array.h 00:02:01.176 CC app/trace_record/trace_record.o 00:02:01.176 TEST_HEADER include/spdk/blob_bdev.h 00:02:01.176 CC app/spdk_nvme_discover/discovery_aer.o 00:02:01.176 TEST_HEADER include/spdk/bit_pool.h 00:02:01.176 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:01.176 TEST_HEADER include/spdk/blobfs.h 00:02:01.176 TEST_HEADER include/spdk/blob.h 00:02:01.176 TEST_HEADER include/spdk/conf.h 00:02:01.176 CC app/spdk_nvme_perf/perf.o 00:02:01.176 CC app/spdk_lspci/spdk_lspci.o 00:02:01.177 TEST_HEADER include/spdk/cpuset.h 00:02:01.177 TEST_HEADER include/spdk/config.h 00:02:01.177 TEST_HEADER include/spdk/crc16.h 00:02:01.177 TEST_HEADER include/spdk/crc64.h 00:02:01.177 TEST_HEADER include/spdk/crc32.h 00:02:01.177 TEST_HEADER include/spdk/dif.h 00:02:01.177 TEST_HEADER include/spdk/dma.h 00:02:01.177 TEST_HEADER include/spdk/endian.h 00:02:01.177 TEST_HEADER include/spdk/env_dpdk.h 00:02:01.177 TEST_HEADER include/spdk/event.h 00:02:01.177 TEST_HEADER include/spdk/fd_group.h 00:02:01.177 CC app/spdk_nvme_identify/identify.o 00:02:01.177 TEST_HEADER include/spdk/env.h 00:02:01.177 TEST_HEADER include/spdk/fd.h 00:02:01.177 CC app/spdk_top/spdk_top.o 00:02:01.177 
TEST_HEADER include/spdk/file.h 00:02:01.177 TEST_HEADER include/spdk/hexlify.h 00:02:01.177 TEST_HEADER include/spdk/ftl.h 00:02:01.177 TEST_HEADER include/spdk/gpt_spec.h 00:02:01.177 TEST_HEADER include/spdk/histogram_data.h 00:02:01.177 TEST_HEADER include/spdk/idxd.h 00:02:01.177 TEST_HEADER include/spdk/idxd_spec.h 00:02:01.177 TEST_HEADER include/spdk/ioat.h 00:02:01.177 TEST_HEADER include/spdk/init.h 00:02:01.177 CC app/spdk_dd/spdk_dd.o 00:02:01.177 TEST_HEADER include/spdk/ioat_spec.h 00:02:01.442 TEST_HEADER include/spdk/iscsi_spec.h 00:02:01.442 TEST_HEADER include/spdk/json.h 00:02:01.442 TEST_HEADER include/spdk/jsonrpc.h 00:02:01.442 TEST_HEADER include/spdk/likely.h 00:02:01.442 TEST_HEADER include/spdk/log.h 00:02:01.442 TEST_HEADER include/spdk/lvol.h 00:02:01.442 TEST_HEADER include/spdk/memory.h 00:02:01.442 TEST_HEADER include/spdk/mmio.h 00:02:01.442 TEST_HEADER include/spdk/nbd.h 00:02:01.442 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:01.442 TEST_HEADER include/spdk/notify.h 00:02:01.442 TEST_HEADER include/spdk/nvme_intel.h 00:02:01.442 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:01.442 TEST_HEADER include/spdk/nvme.h 00:02:01.442 CC app/iscsi_tgt/iscsi_tgt.o 00:02:01.442 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:01.442 TEST_HEADER include/spdk/nvme_spec.h 00:02:01.442 TEST_HEADER include/spdk/nvme_zns.h 00:02:01.442 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:01.442 CC app/vhost/vhost.o 00:02:01.442 CC app/spdk_tgt/spdk_tgt.o 00:02:01.442 TEST_HEADER include/spdk/nvmf.h 00:02:01.442 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:01.442 CC app/nvmf_tgt/nvmf_main.o 00:02:01.442 TEST_HEADER include/spdk/nvmf_spec.h 00:02:01.442 TEST_HEADER include/spdk/opal.h 00:02:01.442 TEST_HEADER include/spdk/nvmf_transport.h 00:02:01.442 TEST_HEADER include/spdk/opal_spec.h 00:02:01.442 TEST_HEADER include/spdk/pci_ids.h 00:02:01.442 TEST_HEADER include/spdk/pipe.h 00:02:01.442 TEST_HEADER include/spdk/queue.h 00:02:01.442 TEST_HEADER include/spdk/reduce.h 00:02:01.442 TEST_HEADER include/spdk/rpc.h 00:02:01.442 TEST_HEADER include/spdk/scsi.h 00:02:01.442 TEST_HEADER include/spdk/scsi_spec.h 00:02:01.442 TEST_HEADER include/spdk/scheduler.h 00:02:01.442 TEST_HEADER include/spdk/stdinc.h 00:02:01.442 TEST_HEADER include/spdk/sock.h 00:02:01.442 TEST_HEADER include/spdk/string.h 00:02:01.442 TEST_HEADER include/spdk/thread.h 00:02:01.442 TEST_HEADER include/spdk/trace.h 00:02:01.442 TEST_HEADER include/spdk/trace_parser.h 00:02:01.442 TEST_HEADER include/spdk/tree.h 00:02:01.442 TEST_HEADER include/spdk/ublk.h 00:02:01.442 TEST_HEADER include/spdk/uuid.h 00:02:01.442 TEST_HEADER include/spdk/version.h 00:02:01.442 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:01.442 TEST_HEADER include/spdk/util.h 00:02:01.442 TEST_HEADER include/spdk/vhost.h 00:02:01.442 TEST_HEADER include/spdk/vmd.h 00:02:01.442 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:01.442 TEST_HEADER include/spdk/xor.h 00:02:01.442 CXX test/cpp_headers/accel.o 00:02:01.442 TEST_HEADER include/spdk/zipf.h 00:02:01.442 CXX test/cpp_headers/accel_module.o 00:02:01.443 CXX test/cpp_headers/barrier.o 00:02:01.443 CXX test/cpp_headers/base64.o 00:02:01.443 CXX test/cpp_headers/assert.o 00:02:01.443 CXX test/cpp_headers/bdev_module.o 00:02:01.443 CXX test/cpp_headers/bdev.o 00:02:01.443 CXX test/cpp_headers/bit_array.o 00:02:01.443 CXX test/cpp_headers/bdev_zone.o 00:02:01.443 CXX test/cpp_headers/bit_pool.o 00:02:01.443 CXX test/cpp_headers/blob_bdev.o 00:02:01.443 CXX test/cpp_headers/blobfs.o 00:02:01.443 CXX 
test/cpp_headers/blobfs_bdev.o 00:02:01.443 CXX test/cpp_headers/conf.o 00:02:01.443 CXX test/cpp_headers/blob.o 00:02:01.443 CXX test/cpp_headers/config.o 00:02:01.443 CXX test/cpp_headers/crc16.o 00:02:01.443 CXX test/cpp_headers/crc64.o 00:02:01.443 CXX test/cpp_headers/crc32.o 00:02:01.443 CXX test/cpp_headers/cpuset.o 00:02:01.443 CC test/nvme/sgl/sgl.o 00:02:01.443 CXX test/cpp_headers/dif.o 00:02:01.443 CXX test/cpp_headers/dma.o 00:02:01.443 CXX test/cpp_headers/endian.o 00:02:01.443 CXX test/cpp_headers/env_dpdk.o 00:02:01.443 CXX test/cpp_headers/fd_group.o 00:02:01.443 CXX test/cpp_headers/env.o 00:02:01.443 CXX test/cpp_headers/event.o 00:02:01.443 CXX test/cpp_headers/ftl.o 00:02:01.443 CC test/app/histogram_perf/histogram_perf.o 00:02:01.443 CXX test/cpp_headers/file.o 00:02:01.443 CXX test/cpp_headers/fd.o 00:02:01.443 CXX test/cpp_headers/gpt_spec.o 00:02:01.443 CC test/app/jsoncat/jsoncat.o 00:02:01.443 CXX test/cpp_headers/hexlify.o 00:02:01.443 CXX test/cpp_headers/histogram_data.o 00:02:01.443 CXX test/cpp_headers/idxd.o 00:02:01.443 CXX test/cpp_headers/init.o 00:02:01.443 CC test/event/event_perf/event_perf.o 00:02:01.443 CXX test/cpp_headers/idxd_spec.o 00:02:01.443 CC test/env/memory/memory_ut.o 00:02:01.443 CC test/nvme/startup/startup.o 00:02:01.443 CXX test/cpp_headers/ioat.o 00:02:01.443 CC test/app/stub/stub.o 00:02:01.443 CXX test/cpp_headers/ioat_spec.o 00:02:01.443 CXX test/cpp_headers/json.o 00:02:01.443 CC test/nvme/reserve/reserve.o 00:02:01.443 CXX test/cpp_headers/iscsi_spec.o 00:02:01.443 CC test/nvme/aer/aer.o 00:02:01.443 CC test/nvme/e2edp/nvme_dp.o 00:02:01.443 CXX test/cpp_headers/jsonrpc.o 00:02:01.443 CC test/nvme/reset/reset.o 00:02:01.443 CC test/env/vtophys/vtophys.o 00:02:01.443 CXX test/cpp_headers/likely.o 00:02:01.443 CXX test/cpp_headers/log.o 00:02:01.443 CC examples/ioat/verify/verify.o 00:02:01.443 CC test/event/reactor/reactor.o 00:02:01.443 CC test/nvme/fdp/fdp.o 00:02:01.443 CXX test/cpp_headers/lvol.o 00:02:01.443 CXX test/cpp_headers/memory.o 00:02:01.443 CC test/nvme/compliance/nvme_compliance.o 00:02:01.443 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:01.443 CC test/app/bdev_svc/bdev_svc.o 00:02:01.443 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:01.443 CXX test/cpp_headers/mmio.o 00:02:01.443 CXX test/cpp_headers/nbd.o 00:02:01.443 CC test/nvme/cuse/cuse.o 00:02:01.443 CC test/nvme/err_injection/err_injection.o 00:02:01.443 CC test/event/reactor_perf/reactor_perf.o 00:02:01.443 CC test/nvme/fused_ordering/fused_ordering.o 00:02:01.443 CXX test/cpp_headers/notify.o 00:02:01.443 CC examples/sock/hello_world/hello_sock.o 00:02:01.443 CXX test/cpp_headers/nvme.o 00:02:01.443 CC test/event/app_repeat/app_repeat.o 00:02:01.443 CC test/env/pci/pci_ut.o 00:02:01.443 CC examples/ioat/perf/perf.o 00:02:01.443 CC test/nvme/connect_stress/connect_stress.o 00:02:01.443 CXX test/cpp_headers/nvme_intel.o 00:02:01.443 CC examples/accel/perf/accel_perf.o 00:02:01.443 CC examples/util/zipf/zipf.o 00:02:01.443 CXX test/cpp_headers/nvme_ocssd.o 00:02:01.443 CC test/dma/test_dma/test_dma.o 00:02:01.443 CC examples/vmd/led/led.o 00:02:01.443 CC test/nvme/overhead/overhead.o 00:02:01.443 CC test/nvme/boot_partition/boot_partition.o 00:02:01.443 CC test/nvme/simple_copy/simple_copy.o 00:02:01.443 CC examples/nvme/reconnect/reconnect.o 00:02:01.443 CC examples/vmd/lsvmd/lsvmd.o 00:02:01.443 CC test/thread/poller_perf/poller_perf.o 00:02:01.443 CC examples/nvme/hello_world/hello_world.o 00:02:01.443 CC app/fio/nvme/fio_plugin.o 
00:02:01.443 CC examples/nvme/arbitration/arbitration.o 00:02:01.443 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:01.443 CC examples/blob/cli/blobcli.o 00:02:01.443 CC test/blobfs/mkfs/mkfs.o 00:02:01.443 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:01.443 CC examples/nvme/hotplug/hotplug.o 00:02:01.443 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:01.443 CC examples/nvmf/nvmf/nvmf.o 00:02:01.443 CC examples/idxd/perf/perf.o 00:02:01.443 CC test/bdev/bdevio/bdevio.o 00:02:01.443 CC examples/nvme/abort/abort.o 00:02:01.443 CC test/event/scheduler/scheduler.o 00:02:01.443 CC examples/bdev/bdevperf/bdevperf.o 00:02:01.443 CC test/accel/dif/dif.o 00:02:01.443 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:01.443 CC app/fio/bdev/fio_plugin.o 00:02:01.712 CC examples/bdev/hello_world/hello_bdev.o 00:02:01.712 CXX test/cpp_headers/nvme_spec.o 00:02:01.712 CC examples/thread/thread/thread_ex.o 00:02:01.712 CC examples/blob/hello_world/hello_blob.o 00:02:01.712 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:01.712 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:01.712 CC test/lvol/esnap/esnap.o 00:02:01.712 CC test/env/mem_callbacks/mem_callbacks.o 00:02:01.712 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:01.980 LINK rpc_client_test 00:02:01.980 LINK spdk_lspci 00:02:01.980 LINK spdk_nvme_discover 00:02:01.980 LINK nvmf_tgt 00:02:01.980 LINK event_perf 00:02:01.980 LINK vhost 00:02:01.980 LINK jsoncat 00:02:01.980 LINK app_repeat 00:02:01.980 LINK iscsi_tgt 00:02:01.980 LINK spdk_tgt 00:02:01.980 LINK reactor 00:02:01.980 LINK interrupt_tgt 00:02:01.980 LINK reactor_perf 00:02:01.980 LINK startup 00:02:02.240 LINK vtophys 00:02:02.240 LINK histogram_perf 00:02:02.240 LINK env_dpdk_post_init 00:02:02.240 LINK zipf 00:02:02.240 LINK lsvmd 00:02:02.240 LINK bdev_svc 00:02:02.240 LINK spdk_trace_record 00:02:02.240 LINK stub 00:02:02.240 LINK doorbell_aers 00:02:02.240 LINK err_injection 00:02:02.240 LINK connect_stress 00:02:02.240 LINK reserve 00:02:02.240 LINK pmr_persistence 00:02:02.240 LINK led 00:02:02.240 LINK fused_ordering 00:02:02.240 LINK ioat_perf 00:02:02.240 LINK cmb_copy 00:02:02.240 CXX test/cpp_headers/nvme_zns.o 00:02:02.240 LINK sgl 00:02:02.240 LINK boot_partition 00:02:02.240 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:02.240 CXX test/cpp_headers/nvmf_cmd.o 00:02:02.240 LINK verify 00:02:02.240 LINK poller_perf 00:02:02.240 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:02.240 LINK simple_copy 00:02:02.240 CXX test/cpp_headers/nvmf_spec.o 00:02:02.240 CXX test/cpp_headers/nvmf_transport.o 00:02:02.240 CXX test/cpp_headers/nvmf.o 00:02:02.240 LINK mkfs 00:02:02.240 CXX test/cpp_headers/opal.o 00:02:02.240 CXX test/cpp_headers/opal_spec.o 00:02:02.240 LINK spdk_dd 00:02:02.240 CXX test/cpp_headers/pci_ids.o 00:02:02.240 LINK thread 00:02:02.240 CXX test/cpp_headers/pipe.o 00:02:02.240 CXX test/cpp_headers/queue.o 00:02:02.240 LINK nvme_dp 00:02:02.240 LINK overhead 00:02:02.500 CXX test/cpp_headers/reduce.o 00:02:02.500 CXX test/cpp_headers/rpc.o 00:02:02.500 CXX test/cpp_headers/scheduler.o 00:02:02.500 CXX test/cpp_headers/scsi.o 00:02:02.500 LINK hotplug 00:02:02.500 CXX test/cpp_headers/scsi_spec.o 00:02:02.500 CXX test/cpp_headers/sock.o 00:02:02.500 CXX test/cpp_headers/stdinc.o 00:02:02.500 CXX test/cpp_headers/string.o 00:02:02.500 LINK hello_world 00:02:02.500 CXX test/cpp_headers/thread.o 00:02:02.500 CXX test/cpp_headers/trace.o 00:02:02.501 LINK hello_bdev 00:02:02.501 CXX test/cpp_headers/trace_parser.o 00:02:02.501 CXX test/cpp_headers/tree.o 00:02:02.501 
CXX test/cpp_headers/ublk.o 00:02:02.501 CXX test/cpp_headers/util.o 00:02:02.501 CXX test/cpp_headers/uuid.o 00:02:02.501 LINK hello_sock 00:02:02.501 CXX test/cpp_headers/version.o 00:02:02.501 LINK scheduler 00:02:02.501 LINK nvme_compliance 00:02:02.501 LINK nvmf 00:02:02.501 LINK fdp 00:02:02.501 CXX test/cpp_headers/vfio_user_pci.o 00:02:02.501 CXX test/cpp_headers/vfio_user_spec.o 00:02:02.501 CXX test/cpp_headers/vhost.o 00:02:02.501 CXX test/cpp_headers/vmd.o 00:02:02.501 CXX test/cpp_headers/xor.o 00:02:02.501 LINK reset 00:02:02.501 CXX test/cpp_headers/zipf.o 00:02:02.501 LINK hello_blob 00:02:02.501 LINK aer 00:02:02.501 LINK test_dma 00:02:02.501 LINK abort 00:02:02.501 LINK arbitration 00:02:02.501 LINK reconnect 00:02:02.501 LINK bdevio 00:02:02.501 LINK idxd_perf 00:02:02.759 LINK spdk_trace 00:02:02.759 LINK spdk_bdev 00:02:02.759 LINK dif 00:02:02.759 LINK pci_ut 00:02:02.759 LINK nvme_fuzz 00:02:02.759 LINK blobcli 00:02:02.759 LINK nvme_manage 00:02:02.759 LINK accel_perf 00:02:03.017 LINK spdk_nvme 00:02:03.017 LINK vhost_fuzz 00:02:03.017 LINK mem_callbacks 00:02:03.017 LINK memory_ut 00:02:03.017 LINK spdk_top 00:02:03.017 LINK spdk_nvme_perf 00:02:03.017 LINK bdevperf 00:02:03.017 LINK spdk_nvme_identify 00:02:03.017 LINK cuse 00:02:03.950 LINK iscsi_fuzz 00:02:05.844 LINK esnap 00:02:05.844 00:02:05.844 real 0m34.858s 00:02:05.844 user 5m39.274s 00:02:05.844 sys 4m22.716s 00:02:05.844 20:19:24 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:05.844 20:19:24 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.844 ************************************ 00:02:05.844 END TEST make 00:02:05.844 ************************************ 00:02:05.844 20:19:24 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:02:06.102 20:19:24 -- nvmf/common.sh@7 -- # uname -s 00:02:06.102 20:19:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:06.102 20:19:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:06.102 20:19:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:06.102 20:19:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:06.102 20:19:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:06.102 20:19:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:06.102 20:19:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:06.102 20:19:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:06.102 20:19:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:06.102 20:19:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:06.102 20:19:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:02:06.102 20:19:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:02:06.102 20:19:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:06.102 20:19:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:06.102 20:19:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:06.102 20:19:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:02:06.102 20:19:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:06.102 20:19:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:06.102 20:19:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:06.102 20:19:24 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.102 20:19:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.102 20:19:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.102 20:19:24 -- paths/export.sh@5 -- # export PATH 00:02:06.102 20:19:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.102 20:19:24 -- nvmf/common.sh@46 -- # : 0 00:02:06.102 20:19:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:06.102 20:19:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:06.102 20:19:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:06.102 20:19:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:06.102 20:19:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:06.102 20:19:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:06.102 20:19:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:06.102 20:19:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:06.102 20:19:24 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:06.102 20:19:24 -- spdk/autotest.sh@32 -- # uname -s 00:02:06.102 20:19:24 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:06.102 20:19:24 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:06.102 20:19:24 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/coredumps 00:02:06.102 20:19:24 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:06.102 20:19:24 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/coredumps 00:02:06.102 20:19:24 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:06.102 20:19:24 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:06.102 20:19:24 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:06.102 20:19:24 -- spdk/autotest.sh@48 -- # udevadm_pid=3257926 00:02:06.102 20:19:24 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power 00:02:06.102 20:19:24 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:06.102 20:19:24 -- spdk/autotest.sh@54 -- # echo 3257928 00:02:06.102 20:19:24 -- spdk/autotest.sh@56 -- # echo 3257929 00:02:06.102 20:19:24 -- spdk/autotest.sh@58 -- # [[ ............................... 
!= QEMU ]] 00:02:06.102 20:19:24 -- spdk/autotest.sh@60 -- # echo 3257930 00:02:06.102 20:19:24 -- spdk/autotest.sh@62 -- # echo 3257931 00:02:06.102 20:19:24 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power 00:02:06.102 20:19:24 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:06.102 20:19:24 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:06.102 20:19:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:06.102 20:19:24 -- common/autotest_common.sh@10 -- # set +x 00:02:06.102 20:19:24 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power 00:02:06.102 20:19:24 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l 00:02:06.102 20:19:24 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l 00:02:06.102 20:19:24 -- spdk/autotest.sh@70 -- # create_test_list 00:02:06.102 20:19:24 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:06.102 20:19:24 -- common/autotest_common.sh@10 -- # set +x 00:02:06.102 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:06.102 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:06.102 20:19:24 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/autotest.sh 00:02:06.102 20:19:24 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk 00:02:06.102 20:19:24 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:02:06.102 20:19:24 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output 00:02:06.102 20:19:24 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/dsa-phy-autotest/spdk 00:02:06.102 20:19:24 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:06.102 20:19:24 -- common/autotest_common.sh@1440 -- # uname 00:02:06.102 20:19:24 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:02:06.103 20:19:24 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:06.103 20:19:24 -- common/autotest_common.sh@1460 -- # uname 00:02:06.103 20:19:24 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:02:06.103 20:19:24 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:06.103 20:19:24 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:02:06.103 20:19:24 -- spdk/autotest.sh@83 -- # hash lcov 00:02:06.103 20:19:24 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:06.103 20:19:24 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:06.103 --rc lcov_branch_coverage=1 00:02:06.103 --rc lcov_function_coverage=1 00:02:06.103 --rc genhtml_branch_coverage=1 00:02:06.103 --rc genhtml_function_coverage=1 00:02:06.103 --rc genhtml_legend=1 00:02:06.103 --rc geninfo_all_blocks=1 00:02:06.103 ' 00:02:06.103 20:19:24 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:02:06.103 --rc lcov_branch_coverage=1 00:02:06.103 --rc lcov_function_coverage=1 00:02:06.103 --rc genhtml_branch_coverage=1 00:02:06.103 --rc genhtml_function_coverage=1 00:02:06.103 --rc genhtml_legend=1 00:02:06.103 --rc 
geninfo_all_blocks=1 00:02:06.103 ' 00:02:06.103 20:19:24 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:06.103 --rc lcov_branch_coverage=1 00:02:06.103 --rc lcov_function_coverage=1 00:02:06.103 --rc genhtml_branch_coverage=1 00:02:06.103 --rc genhtml_function_coverage=1 00:02:06.103 --rc genhtml_legend=1 00:02:06.103 --rc geninfo_all_blocks=1 00:02:06.103 --no-external' 00:02:06.103 20:19:24 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:06.103 --rc lcov_branch_coverage=1 00:02:06.103 --rc lcov_function_coverage=1 00:02:06.103 --rc genhtml_branch_coverage=1 00:02:06.103 --rc genhtml_function_coverage=1 00:02:06.103 --rc genhtml_legend=1 00:02:06.103 --rc geninfo_all_blocks=1 00:02:06.103 --no-external' 00:02:06.103 20:19:24 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:06.103 lcov: LCOV version 1.14 00:02:06.103 20:19:24 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/dsa-phy-autotest/spdk -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_base.info 00:02:10.299 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:10.300 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:10.300 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:10.300 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:10.300 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:10.300 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:20.275 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:20.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:20.275 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:20.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:20.275 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:20.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:20.275 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:20.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:20.275 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:20.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:20.275 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:20.275 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno
00:02:20.275 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found
00:02:20.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev.gcno
00:02:20.275 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found
00:02:20.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_array.gcno
00:02:20.275 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found
00:02:20.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/assert.gcno
00:02:20.275 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found
00:02:20.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc32.gcno
00:02:20.275 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found
00:02:20.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno
00:02:20.275 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found
00:02:20.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob.gcno
00:02:20.275 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found
00:02:20.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs.gcno
00:02:20.275 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc64.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc16.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/conf.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/cpuset.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dma.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/config.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/endian.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dif.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd_group.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ftl.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/event.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/file.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/init.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/hexlify.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/json.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/likely.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/log.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nbd.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/lvol.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/memory.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/mmio.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/notify.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found
00:02:20.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf.gcno
00:02:20.276 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found
00:02:20.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno
00:02:20.277 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found
00:02:20.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal.gcno
00:02:20.277 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found
00:02:20.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno
00:02:20.277 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found
00:02:20.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno
00:02:20.277 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found
00:02:20.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pipe.gcno
00:02:20.277 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found
00:02:20.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/queue.gcno
00:02:20.277 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found
00:02:20.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/reduce.gcno
00:02:20.277 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found
00:02:20.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/rpc.gcno
00:02:20.277 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found
00:02:20.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scheduler.gcno
00:02:20.277 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found
00:02:20.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi.gcno
00:02:20.277 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found
00:02:20.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/sock.gcno
00:02:20.277 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found
00:02:20.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno
00:02:20.277 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found
00:02:20.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/stdinc.gcno
00:02:20.277 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found
00:02:20.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/string.gcno
00:02:20.277 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found
00:02:20.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/thread.gcno
00:02:20.277 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found
00:02:20.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace.gcno
00:02:20.277 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found
00:02:20.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/tree.gcno
00:02:20.277 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found
00:02:20.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno
00:02:20.277 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found
00:02:20.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ublk.gcno
00:02:20.277 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found
00:02:20.277 geninfo: WARNING: GCOV
did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:20.277 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:20.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:20.277 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:20.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:20.277 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:20.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:20.277 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:20.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:20.277 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:20.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:20.277 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:20.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:20.277 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:20.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:20.277 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:20.277 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:21.299 20:19:39 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:02:21.299 20:19:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:21.299 20:19:39 -- common/autotest_common.sh@10 -- # set +x 00:02:21.299 20:19:39 -- spdk/autotest.sh@102 -- # rm -f 00:02:21.299 20:19:39 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:02:23.844 0000:c9:00.0 (8086 0a54): Already using the nvme driver 00:02:23.844 0000:74:02.0 (8086 0cfe): Already using the idxd driver 00:02:23.844 0000:f1:02.0 (8086 0cfe): Already using the idxd driver 00:02:23.844 0000:79:02.0 (8086 0cfe): Already using the idxd driver 00:02:23.844 0000:6f:01.0 (8086 0b25): Already using the idxd driver 00:02:23.844 0000:6f:02.0 (8086 0cfe): Already using the idxd driver 00:02:23.844 0000:f6:01.0 (8086 0b25): Already using the idxd driver 00:02:23.844 0000:f6:02.0 (8086 0cfe): Already using the idxd driver 00:02:23.844 0000:74:01.0 (8086 0b25): Already using the idxd driver 00:02:23.844 0000:6a:02.0 (8086 0cfe): Already using the idxd driver 00:02:23.844 0000:79:01.0 (8086 0b25): Already using the idxd driver 00:02:23.844 0000:ec:01.0 (8086 0b25): Already using the idxd driver 00:02:23.844 0000:6a:01.0 (8086 0b25): Already using the idxd driver 00:02:23.844 0000:ca:00.0 (8086 0a54): Already using the nvme driver 00:02:23.844 0000:ec:02.0 (8086 0cfe): Already using the idxd driver 00:02:23.844 0000:e7:01.0 (8086 0b25): Already using 
the idxd driver 00:02:23.844 0000:e7:02.0 (8086 0cfe): Already using the idxd driver 00:02:24.105 0000:f1:01.0 (8086 0b25): Already using the idxd driver 00:02:24.105 20:19:42 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:02:24.105 20:19:42 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:02:24.105 20:19:42 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:02:24.105 20:19:42 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:02:24.105 20:19:42 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:24.105 20:19:42 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:02:24.105 20:19:42 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:02:24.105 20:19:42 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:24.105 20:19:42 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:24.105 20:19:42 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:24.105 20:19:42 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:02:24.105 20:19:42 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:02:24.105 20:19:42 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:24.105 20:19:42 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:24.105 20:19:42 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:02:24.105 20:19:42 -- spdk/autotest.sh@121 -- # grep -v p 00:02:24.105 20:19:42 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 00:02:24.105 20:19:42 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:24.105 20:19:42 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:02:24.105 20:19:42 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:02:24.105 20:19:42 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:02:24.105 20:19:42 -- scripts/common.sh@389 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:24.105 No valid GPT data, bailing 00:02:24.105 20:19:42 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:24.105 20:19:42 -- scripts/common.sh@393 -- # pt= 00:02:24.105 20:19:42 -- scripts/common.sh@394 -- # return 1 00:02:24.105 20:19:42 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:24.105 1+0 records in 00:02:24.105 1+0 records out 00:02:24.105 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00291013 s, 360 MB/s 00:02:24.105 20:19:42 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:24.105 20:19:42 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:02:24.105 20:19:42 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:02:24.105 20:19:42 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:02:24.105 20:19:42 -- scripts/common.sh@389 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:02:24.105 No valid GPT data, bailing 00:02:24.105 20:19:42 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:02:24.105 20:19:42 -- scripts/common.sh@393 -- # pt= 00:02:24.105 20:19:42 -- scripts/common.sh@394 -- # return 1 00:02:24.105 20:19:42 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:02:24.105 1+0 records in 00:02:24.105 1+0 records out 00:02:24.105 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00266704 s, 393 MB/s 00:02:24.105 20:19:42 -- spdk/autotest.sh@129 -- # sync 00:02:24.105 20:19:42 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:24.105 
20:19:42 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:24.105 20:19:42 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:29.395 20:19:46 -- spdk/autotest.sh@135 -- # uname -s 00:02:29.395 20:19:46 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:02:29.395 20:19:46 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/test-setup.sh 00:02:29.395 20:19:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:29.395 20:19:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:29.395 20:19:46 -- common/autotest_common.sh@10 -- # set +x 00:02:29.395 ************************************ 00:02:29.395 START TEST setup.sh 00:02:29.395 ************************************ 00:02:29.395 20:19:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/test-setup.sh 00:02:29.395 * Looking for test storage... 00:02:29.395 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:02:29.395 20:19:46 -- setup/test-setup.sh@10 -- # uname -s 00:02:29.395 20:19:46 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:29.395 20:19:46 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/acl.sh 00:02:29.395 20:19:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:29.395 20:19:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:29.395 20:19:46 -- common/autotest_common.sh@10 -- # set +x 00:02:29.395 ************************************ 00:02:29.395 START TEST acl 00:02:29.395 ************************************ 00:02:29.395 20:19:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/acl.sh 00:02:29.395 * Looking for test storage... 
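[Editor's note] The geninfo warnings above are expected rather than a failure: the cpp_headers test compiles each public SPDK header into an otherwise empty translation unit, so the resulting .gcno files legitimately contain no functions for coverage to report. The pre_cleanup trace that follows rebinds drivers via setup.sh reset and then scrubs the test namespaces: for each /dev/nvme*n* that is not zoned and carries no partition table ("No valid GPT data, bailing"), the first MiB is overwritten with dd. A minimal sketch of that scrub pattern, assuming non-zoned namespaces and substituting blkid for SPDK's spdk-gpt.py probe; this is not the actual autotest.sh code:

#!/usr/bin/env bash
# Editor's sketch of the wipe pattern traced above; device names are illustrative.
for dev in /dev/nvme*n*; do
    [[ $dev == *p* ]] && continue                  # skip partitions, as the `grep -v p` above does
    name=${dev#/dev/}
    # Zoned namespaces cannot be cleared with plain sequential writes; leave them alone.
    zoned=$(cat "/sys/block/$name/queue/zoned" 2>/dev/null || echo none)
    [[ $zoned != none ]] && continue
    # Only clobber the device when no partition table is detected.
    if ! blkid -s PTTYPE -o value "$dev" >/dev/null 2>&1; then
        dd if=/dev/zero of="$dev" bs=1M count=1    # erase any stale GPT/RAID metadata in the first MiB
    fi
done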
00:02:29.395 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:02:29.395 20:19:46 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:29.395 20:19:46 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:02:29.395 20:19:46 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:02:29.395 20:19:46 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:02:29.395 20:19:46 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:29.395 20:19:46 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:02:29.395 20:19:46 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:02:29.395 20:19:46 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:29.395 20:19:46 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:29.395 20:19:46 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:29.395 20:19:46 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:02:29.395 20:19:46 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:02:29.395 20:19:46 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:29.395 20:19:46 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:29.395 20:19:46 -- setup/acl.sh@12 -- # devs=() 00:02:29.395 20:19:46 -- setup/acl.sh@12 -- # declare -a devs 00:02:29.395 20:19:46 -- setup/acl.sh@13 -- # drivers=() 00:02:29.395 20:19:46 -- setup/acl.sh@13 -- # declare -A drivers 00:02:29.395 20:19:46 -- setup/acl.sh@51 -- # setup reset 00:02:29.395 20:19:46 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:29.395 20:19:46 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:02:31.938 20:19:50 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:31.938 20:19:50 -- setup/acl.sh@16 -- # local dev driver 00:02:31.938 20:19:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.938 20:19:50 -- setup/acl.sh@15 -- # setup output status 00:02:31.938 20:19:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:31.938 20:19:50 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh status 00:02:34.480 Hugepages 00:02:34.480 node hugesize free / total 00:02:34.480 20:19:52 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:34.480 20:19:52 -- setup/acl.sh@19 -- # continue 00:02:34.480 20:19:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.480 20:19:52 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:34.480 20:19:52 -- setup/acl.sh@19 -- # continue 00:02:34.480 20:19:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.480 20:19:52 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:34.480 20:19:52 -- setup/acl.sh@19 -- # continue 00:02:34.480 20:19:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.480 00:02:34.480 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:34.480 20:19:52 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:34.480 20:19:52 -- setup/acl.sh@19 -- # continue 00:02:34.480 20:19:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.480 20:19:52 -- setup/acl.sh@19 -- # [[ 0000:6a:01.0 == *:*:*.* ]] 00:02:34.480 20:19:52 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:34.480 20:19:52 -- setup/acl.sh@20 -- # continue 00:02:34.480 20:19:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.480 20:19:52 -- setup/acl.sh@19 -- # [[ 0000:6a:02.0 == *:*:*.* ]] 00:02:34.480 20:19:52 -- setup/acl.sh@20 -- # [[ idxd 
== nvme ]] 00:02:34.480 20:19:52 -- setup/acl.sh@20 -- # continue 00:02:34.480 20:19:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.480 20:19:52 -- setup/acl.sh@19 -- # [[ 0000:6f:01.0 == *:*:*.* ]] 00:02:34.480 20:19:52 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:34.480 20:19:52 -- setup/acl.sh@20 -- # continue 00:02:34.480 20:19:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.480 20:19:52 -- setup/acl.sh@19 -- # [[ 0000:6f:02.0 == *:*:*.* ]] 00:02:34.480 20:19:52 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:34.480 20:19:52 -- setup/acl.sh@20 -- # continue 00:02:34.480 20:19:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.480 20:19:52 -- setup/acl.sh@19 -- # [[ 0000:74:01.0 == *:*:*.* ]] 00:02:34.480 20:19:52 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:34.480 20:19:52 -- setup/acl.sh@20 -- # continue 00:02:34.480 20:19:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.480 20:19:52 -- setup/acl.sh@19 -- # [[ 0000:74:02.0 == *:*:*.* ]] 00:02:34.480 20:19:52 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:34.480 20:19:52 -- setup/acl.sh@20 -- # continue 00:02:34.480 20:19:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.480 20:19:52 -- setup/acl.sh@19 -- # [[ 0000:79:01.0 == *:*:*.* ]] 00:02:34.480 20:19:52 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:34.480 20:19:52 -- setup/acl.sh@20 -- # continue 00:02:34.480 20:19:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.480 20:19:52 -- setup/acl.sh@19 -- # [[ 0000:79:02.0 == *:*:*.* ]] 00:02:34.480 20:19:52 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:34.480 20:19:52 -- setup/acl.sh@20 -- # continue 00:02:34.480 20:19:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.480 20:19:52 -- setup/acl.sh@19 -- # [[ 0000:c9:00.0 == *:*:*.* ]] 00:02:34.480 20:19:52 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:34.480 20:19:52 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\c\9\:\0\0\.\0* ]] 00:02:34.480 20:19:52 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:34.480 20:19:52 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:34.480 20:19:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.740 20:19:52 -- setup/acl.sh@19 -- # [[ 0000:ca:00.0 == *:*:*.* ]] 00:02:34.740 20:19:52 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:34.740 20:19:52 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\c\a\:\0\0\.\0* ]] 00:02:34.740 20:19:52 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:34.740 20:19:52 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:34.740 20:19:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.740 20:19:52 -- setup/acl.sh@19 -- # [[ 0000:e7:01.0 == *:*:*.* ]] 00:02:34.740 20:19:52 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:34.740 20:19:52 -- setup/acl.sh@20 -- # continue 00:02:34.740 20:19:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.740 20:19:52 -- setup/acl.sh@19 -- # [[ 0000:e7:02.0 == *:*:*.* ]] 00:02:34.740 20:19:52 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:34.740 20:19:52 -- setup/acl.sh@20 -- # continue 00:02:34.740 20:19:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.740 20:19:52 -- setup/acl.sh@19 -- # [[ 0000:ec:01.0 == *:*:*.* ]] 00:02:34.740 20:19:52 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:34.740 20:19:52 -- setup/acl.sh@20 -- # continue 00:02:34.740 20:19:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.740 20:19:52 -- setup/acl.sh@19 -- # [[ 0000:ec:02.0 == *:*:*.* ]] 00:02:34.740 20:19:52 -- setup/acl.sh@20 
-- # [[ idxd == nvme ]] 00:02:34.740 20:19:52 -- setup/acl.sh@20 -- # continue 00:02:34.740 20:19:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.740 20:19:52 -- setup/acl.sh@19 -- # [[ 0000:f1:01.0 == *:*:*.* ]] 00:02:34.740 20:19:52 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:34.740 20:19:52 -- setup/acl.sh@20 -- # continue 00:02:34.740 20:19:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.740 20:19:52 -- setup/acl.sh@19 -- # [[ 0000:f1:02.0 == *:*:*.* ]] 00:02:34.740 20:19:52 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:34.740 20:19:52 -- setup/acl.sh@20 -- # continue 00:02:34.740 20:19:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.740 20:19:52 -- setup/acl.sh@19 -- # [[ 0000:f6:01.0 == *:*:*.* ]] 00:02:34.740 20:19:52 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:34.740 20:19:52 -- setup/acl.sh@20 -- # continue 00:02:34.740 20:19:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.740 20:19:52 -- setup/acl.sh@19 -- # [[ 0000:f6:02.0 == *:*:*.* ]] 00:02:34.740 20:19:52 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:34.740 20:19:52 -- setup/acl.sh@20 -- # continue 00:02:34.740 20:19:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.740 20:19:52 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:02:34.740 20:19:52 -- setup/acl.sh@54 -- # run_test denied denied 00:02:34.740 20:19:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:34.740 20:19:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:34.740 20:19:52 -- common/autotest_common.sh@10 -- # set +x 00:02:34.740 ************************************ 00:02:34.740 START TEST denied 00:02:34.740 ************************************ 00:02:34.740 20:19:52 -- common/autotest_common.sh@1104 -- # denied 00:02:34.740 20:19:52 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:c9:00.0' 00:02:34.740 20:19:52 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:c9:00.0' 00:02:34.740 20:19:52 -- setup/acl.sh@38 -- # setup output config 00:02:34.740 20:19:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:34.740 20:19:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:02:40.028 0000:c9:00.0 (8086 0a54): Skipping denied controller at 0000:c9:00.0 00:02:40.028 20:19:58 -- setup/acl.sh@40 -- # verify 0000:c9:00.0 00:02:40.028 20:19:58 -- setup/acl.sh@28 -- # local dev driver 00:02:40.028 20:19:58 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:40.028 20:19:58 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:c9:00.0 ]] 00:02:40.028 20:19:58 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:c9:00.0/driver 00:02:40.028 20:19:58 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:40.028 20:19:58 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:40.028 20:19:58 -- setup/acl.sh@41 -- # setup reset 00:02:40.028 20:19:58 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:40.028 20:19:58 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:02:44.231 00:02:44.231 real 0m9.203s 00:02:44.231 user 0m2.024s 00:02:44.231 sys 0m3.849s 00:02:44.231 20:20:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:44.231 20:20:02 -- common/autotest_common.sh@10 -- # set +x 00:02:44.231 ************************************ 00:02:44.231 END TEST denied 00:02:44.231 ************************************ 00:02:44.231 20:20:02 -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:44.231 20:20:02 -- common/autotest_common.sh@1077 
-- # '[' 2 -le 1 ']' 00:02:44.231 20:20:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:44.231 20:20:02 -- common/autotest_common.sh@10 -- # set +x 00:02:44.231 ************************************ 00:02:44.231 START TEST allowed 00:02:44.231 ************************************ 00:02:44.231 20:20:02 -- common/autotest_common.sh@1104 -- # allowed 00:02:44.231 20:20:02 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:c9:00.0 00:02:44.231 20:20:02 -- setup/acl.sh@45 -- # setup output config 00:02:44.231 20:20:02 -- setup/acl.sh@46 -- # grep -E '0000:c9:00.0 .*: nvme -> .*' 00:02:44.231 20:20:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:44.231 20:20:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:02:49.539 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:02:49.539 20:20:06 -- setup/acl.sh@47 -- # verify 0000:ca:00.0 00:02:49.539 20:20:06 -- setup/acl.sh@28 -- # local dev driver 00:02:49.539 20:20:06 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:49.539 20:20:06 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:ca:00.0 ]] 00:02:49.539 20:20:06 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:ca:00.0/driver 00:02:49.539 20:20:06 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:49.539 20:20:06 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:49.539 20:20:06 -- setup/acl.sh@48 -- # setup reset 00:02:49.539 20:20:06 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:49.539 20:20:06 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:02:52.083 00:02:52.083 real 0m7.942s 00:02:52.083 user 0m1.955s 00:02:52.083 sys 0m3.713s 00:02:52.083 20:20:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:52.083 20:20:10 -- common/autotest_common.sh@10 -- # set +x 00:02:52.083 ************************************ 00:02:52.083 END TEST allowed 00:02:52.083 ************************************ 00:02:52.083 00:02:52.083 real 0m23.327s 00:02:52.083 user 0m6.069s 00:02:52.083 sys 0m11.563s 00:02:52.083 20:20:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:52.083 20:20:10 -- common/autotest_common.sh@10 -- # set +x 00:02:52.083 ************************************ 00:02:52.083 END TEST acl 00:02:52.083 ************************************ 00:02:52.083 20:20:10 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/hugepages.sh 00:02:52.083 20:20:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:52.083 20:20:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:52.083 20:20:10 -- common/autotest_common.sh@10 -- # set +x 00:02:52.083 ************************************ 00:02:52.083 START TEST hugepages 00:02:52.083 ************************************ 00:02:52.083 20:20:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/hugepages.sh 00:02:52.083 * Looking for test storage... 
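[Editor's note] The acl suite that finishes here exercises setup.sh's PCI_BLOCKED/PCI_ALLOWED filtering, and the repeated `read -r _ dev _ _ _ driver _` entries above are acl.sh splitting each `setup.sh status` row into its BDF and driver columns: idxd engines are skipped with `continue`, while NVMe controllers land in the devs array. A reduced sketch of that parsing idiom, with two fabricated stand-in rows instead of live `setup.sh status` output; not the actual setup/acl.sh:

#!/usr/bin/env bash
# Editor's sketch of the column parsing seen in the acl trace above.
PCI_BLOCKED=${PCI_BLOCKED:-}                      # the denied test sets this to " 0000:c9:00.0"
devs=()
declare -A drivers
while read -r _ dev _ _ _ driver _; do
    [[ $dev == *:*:*.* ]] || continue             # drop the Hugepages/header lines
    [[ $driver == nvme ]] || continue             # idxd devices are skipped via continue
    [[ $PCI_BLOCKED == *"$dev"* ]] && continue    # honour the block list
    devs+=("$dev")
    drivers[$dev]=$driver
done <<'EOF'
I/OAT 0000:6a:01.0 8086 0b25 0 idxd -
NVMe 0000:c9:00.0 8086 0a54 1 nvme nvme0
EOF
printf 'collected %d device(s): %s\n' "${#devs[@]}" "${devs[*]}"

With PCI_BLOCKED empty this keeps only the NVMe row, mirroring the `devs+=("$dev")` / `drivers["$dev"]=nvme` entries in the trace.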
00:02:52.083 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:02:52.083 20:20:10 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:52.083 20:20:10 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:52.083 20:20:10 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:52.083 20:20:10 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:52.083 20:20:10 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:52.083 20:20:10 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:52.083 20:20:10 -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:52.083 20:20:10 -- setup/common.sh@18 -- # local node= 00:02:52.083 20:20:10 -- setup/common.sh@19 -- # local var val 00:02:52.083 20:20:10 -- setup/common.sh@20 -- # local mem_f mem 00:02:52.083 20:20:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.083 20:20:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:52.083 20:20:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:52.083 20:20:10 -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.083 20:20:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.083 20:20:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558468 kB' 'MemFree: 237702364 kB' 'MemAvailable: 241858428 kB' 'Buffers: 2696 kB' 'Cached: 11814312 kB' 'SwapCached: 0 kB' 'Active: 7732916 kB' 'Inactive: 4736864 kB' 'Active(anon): 7161396 kB' 'Inactive(anon): 0 kB' 'Active(file): 571520 kB' 'Inactive(file): 4736864 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662376 kB' 'Mapped: 208852 kB' 'Shmem: 6508624 kB' 'KReclaimable: 653460 kB' 'Slab: 1337312 kB' 'SReclaimable: 653460 kB' 'SUnreclaim: 683852 kB' 'KernelStack: 25568 kB' 'PageTables: 11124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 135570684 kB' 'Committed_AS: 8864928 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 330292 kB' 'VmallocChunk: 0 kB' 'Percpu: 121344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3084352 kB' 'DirectMap2M: 21858304 kB' 'DirectMap1G: 245366784 kB' 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.083 20:20:10 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.083 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.083 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 
00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 
00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # continue 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.084 20:20:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.084 20:20:10 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:52.084 20:20:10 -- setup/common.sh@33 -- # echo 2048 00:02:52.084 20:20:10 -- setup/common.sh@33 -- # return 0 00:02:52.084 20:20:10 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:52.084 20:20:10 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:52.084 20:20:10 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:52.084 20:20:10 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:52.084 20:20:10 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:52.084 20:20:10 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:52.084 20:20:10 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:52.084 20:20:10 -- setup/hugepages.sh@207 -- # get_nodes 00:02:52.084 20:20:10 -- setup/hugepages.sh@27 -- # local node 00:02:52.084 20:20:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:52.084 20:20:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:52.084 20:20:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:52.084 20:20:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:52.084 20:20:10 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:52.084 20:20:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:52.084 20:20:10 -- setup/hugepages.sh@208 -- # clear_hp 00:02:52.084 20:20:10 -- setup/hugepages.sh@37 -- # local node hp 00:02:52.084 20:20:10 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:52.084 20:20:10 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:52.084 20:20:10 -- setup/hugepages.sh@41 -- # echo 0 00:02:52.084 20:20:10 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:52.084 20:20:10 -- setup/hugepages.sh@41 -- # echo 0 00:02:52.084 20:20:10 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:52.084 20:20:10 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:52.084 20:20:10 -- setup/hugepages.sh@41 -- # echo 0 00:02:52.084 20:20:10 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:52.084 20:20:10 -- setup/hugepages.sh@41 -- # echo 0 00:02:52.084 20:20:10 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:52.084 20:20:10 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:52.084 20:20:10 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:52.084 20:20:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:52.084 20:20:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:52.084 20:20:10 -- common/autotest_common.sh@10 -- # set +x 00:02:52.084 ************************************ 00:02:52.085 START TEST default_setup 00:02:52.085 ************************************ 00:02:52.085 20:20:10 -- common/autotest_common.sh@1104 -- # default_setup 00:02:52.085 20:20:10 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:52.085 20:20:10 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:52.085 20:20:10 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:52.085 20:20:10 -- setup/hugepages.sh@51 -- # shift 00:02:52.085 20:20:10 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:52.085 20:20:10 -- setup/hugepages.sh@52 -- # local node_ids 00:02:52.085 20:20:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:52.085 20:20:10 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:52.085 20:20:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:52.085 20:20:10 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:52.085 20:20:10 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:52.085 20:20:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:52.085 20:20:10 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:52.085 20:20:10 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:52.085 20:20:10 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:52.085 20:20:10 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
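[Editor's note] The wall of `[[ ... == \H\u\g\e\p\a\g\e\s\i\z\e ]]` / `continue` entries above is get_meminfo running under xtrace: it reads /proc/meminfo into an array and walks it with `IFS=': ' read -r var val _` until the requested key matches, then echoes the value, hence the `echo 2048` (kB) before `return 0`. A condensed sketch of the same lookup, without the per-NUMA-node mapfile plumbing of setup/common.sh:

#!/usr/bin/env bash
# Editor's sketch of the get_meminfo lookup traced above (simplified; the real
# helper also handles per-node meminfo under /sys/devices/system/node).
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do          # splits "Hugepagesize:  2048 kB" into key/value
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}
default_hugepages=$(get_meminfo Hugepagesize)     # 2048 on this node, per the trace above
echo "default hugepage size: ${default_hugepages} kB"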
00:02:52.085 20:20:10 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:52.085 20:20:10 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:52.085 20:20:10 -- setup/hugepages.sh@73 -- # return 0 00:02:52.085 20:20:10 -- setup/hugepages.sh@137 -- # setup output 00:02:52.085 20:20:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:52.085 20:20:10 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:02:55.387 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:55.387 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:55.387 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:55.387 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:02:55.387 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:55.387 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:02:55.387 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:55.387 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:02:55.387 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:55.387 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:02:55.387 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:02:55.387 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:02:55.387 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:55.387 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:02:55.387 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:55.387 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:02:57.301 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:02:57.301 0000:ca:00.0 (8086 0a54): nvme -> vfio-pci 00:02:57.566 20:20:15 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:57.566 20:20:15 -- setup/hugepages.sh@89 -- # local node 00:02:57.566 20:20:15 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:57.566 20:20:15 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:57.566 20:20:15 -- setup/hugepages.sh@92 -- # local surp 00:02:57.566 20:20:15 -- setup/hugepages.sh@93 -- # local resv 00:02:57.566 20:20:15 -- setup/hugepages.sh@94 -- # local anon 00:02:57.566 20:20:15 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:57.566 20:20:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:57.566 20:20:15 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:57.566 20:20:15 -- setup/common.sh@18 -- # local node= 00:02:57.566 20:20:15 -- setup/common.sh@19 -- # local var val 00:02:57.566 20:20:15 -- setup/common.sh@20 -- # local mem_f mem 00:02:57.566 20:20:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.566 20:20:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:57.566 20:20:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:57.566 20:20:15 -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.566 20:20:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.566 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.566 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.566 20:20:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558468 kB' 'MemFree: 240092860 kB' 'MemAvailable: 244248348 kB' 'Buffers: 2696 kB' 'Cached: 11814592 kB' 'SwapCached: 0 kB' 'Active: 7741136 kB' 'Inactive: 4736864 kB' 'Active(anon): 7169616 kB' 'Inactive(anon): 0 kB' 'Active(file): 571520 kB' 'Inactive(file): 4736864 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 669852 kB' 'Mapped: 208896 kB' 'Shmem: 6508904 kB' 'KReclaimable: 652308 kB' 'Slab: 1327144 kB' 'SReclaimable: 652308 kB' 'SUnreclaim: 674836 kB' 'KernelStack: 
24992 kB' 'PageTables: 10528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619260 kB' 'Committed_AS: 8858660 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329892 kB' 'VmallocChunk: 0 kB' 'Percpu: 121344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3084352 kB' 'DirectMap2M: 21858304 kB' 'DirectMap1G: 245366784 kB'
[... repeated xtrace elided: one '[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]' test plus 'continue' per /proc/meminfo key until AnonHugePages is reached ...]
00:02:57.567 20:20:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.567 20:20:15 -- setup/common.sh@33 -- # echo 0 00:02:57.567 20:20:15 -- setup/common.sh@33 -- # return 0 00:02:57.567 20:20:15 -- setup/hugepages.sh@97 -- # anon=0
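That was the first complete get_meminfo call: the helper snapshots /proc/meminfo (or a per-node sysfs meminfo when given a node number), strips any "Node N " prefix, then walks the keys until the requested one matches and echoes its value; the long runs of skipped keys are what the elided xtrace shows. A minimal standalone sketch of that parsing loop, reconstructed from this trace and not the verbatim SPDK common.sh:

```bash
#!/usr/bin/env bash
shopt -s extglob  # required for the +([0-9]) pattern used below

get_meminfo() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo mem line
    # A node argument switches the source to that node's sysfs meminfo,
    # as the 'get_meminfo HugePages_Surp 0' call later in this log does.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # strip the "Node N " prefix on sysfs meminfo lines
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue  # the skipped keys in the trace above
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo AnonHugePages     # -> 0 in this run
get_meminfo HugePages_Surp 0  # -> node 0's surplus hugepage count
```

The same loop runs four times in a row below, once per counter (HugePages_Surp, HugePages_Rsvd, HugePages_Total), so only the call, the meminfo snapshot, and the matched result are kept from each pass.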
00:02:57.567 20:20:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:57.567 20:20:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:57.567 20:20:15 -- setup/common.sh@18 -- # local node= 00:02:57.567 20:20:15 -- setup/common.sh@19 -- # local var val 00:02:57.567 20:20:15 -- setup/common.sh@20 -- # local mem_f mem 00:02:57.567 20:20:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.567 20:20:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:57.567 20:20:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:57.567 20:20:15 -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.567 20:20:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.567 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.567 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.567 20:20:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558468 kB' 'MemFree: 240092052 kB' 'MemAvailable: 244247540 kB' 'Buffers: 2696 kB' 'Cached: 11814592 kB' 'SwapCached: 0 kB' 'Active: 7741140 kB' 'Inactive: 4736864 kB' 'Active(anon): 7169620 kB' 'Inactive(anon): 0 kB' 'Active(file): 571520 kB' 'Inactive(file): 4736864 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 669900 kB' 'Mapped: 208896 kB' 'Shmem: 6508904 kB' 'KReclaimable: 652308 kB' 'Slab: 1327112 kB' 'SReclaimable: 652308 kB' 'SUnreclaim: 674804 kB' 'KernelStack: 24976 kB' 'PageTables: 10136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619260 kB' 'Committed_AS: 8858852 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329876 kB' 'VmallocChunk: 0 kB' 'Percpu: 121344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3084352 kB' 'DirectMap2M: 21858304 kB' 'DirectMap1G: 245366784 kB'
[... repeated xtrace elided: one '[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]' test plus 'continue' per /proc/meminfo key until HugePages_Surp is reached ...]
00:02:57.568 20:20:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.568 20:20:15 -- setup/common.sh@33 -- # echo 0 00:02:57.568 20:20:15 -- setup/common.sh@33 -- # return 0 00:02:57.568 20:20:15 -- setup/hugepages.sh@99 -- # surp=0 00:02:57.568 20:20:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:57.568 20:20:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:57.568 20:20:15 -- setup/common.sh@18 -- # local node= 00:02:57.568 20:20:15 -- setup/common.sh@19 -- # local var val 00:02:57.568 20:20:15 -- setup/common.sh@20 -- # local mem_f mem 00:02:57.568 20:20:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.569 20:20:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:57.569 20:20:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:57.569 20:20:15 -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.569 20:20:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.569 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.569 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.569 20:20:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558468 kB' 'MemFree: 240091836 kB' 'MemAvailable: 244247324 kB' 'Buffers: 2696 kB' 'Cached: 11814592 kB' 'SwapCached: 0 kB' 'Active: 7740672 kB' 'Inactive: 4736864 kB' 'Active(anon): 7169152 kB' 'Inactive(anon): 0 kB' 'Active(file): 571520 kB' 'Inactive(file): 4736864 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 669532 kB' 'Mapped: 208844 kB' 'Shmem: 6508904 kB' 'KReclaimable: 652308 kB' 'Slab: 1327496 kB' 'SReclaimable: 652308 kB' 'SUnreclaim: 675188 kB' 'KernelStack: 24992 kB' 'PageTables: 10424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619260 kB' 'Committed_AS: 8858864 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329908 kB' 'VmallocChunk: 0 kB' 'Percpu: 121344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3084352 kB' 'DirectMap2M: 21858304 kB' 'DirectMap1G: 245366784 kB'
[... repeated xtrace elided: one '[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]' test plus 'continue' per /proc/meminfo key until HugePages_Rsvd is reached ...]
00:02:57.570 20:20:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.570 20:20:15 -- setup/common.sh@33 -- # echo 0 00:02:57.570 20:20:15 -- setup/common.sh@33 -- # return 0 00:02:57.570 20:20:15 -- setup/hugepages.sh@100 -- # resv=0 00:02:57.570 20:20:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:57.570 nr_hugepages=1024 00:02:57.570 20:20:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:57.570 resv_hugepages=0 00:02:57.570 20:20:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:57.570 surplus_hugepages=0 00:02:57.570 20:20:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:57.570 anon_hugepages=0 00:02:57.570 20:20:15 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:57.570 20:20:15 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
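The arithmetic being checked here is simple accounting: the global hugepage pool must equal the requested pages plus any surplus and reserved pages, and the per-node counts gathered next must sum back to that total. A simplified sketch of this verification step, reusing the get_meminfo helper sketched earlier; this is an illustration of the accounting, not SPDK's exact control flow:

```bash
verify_nr_hugepages() {
    local nr=1024 anon surp resv total node sum=0
    anon=$(get_meminfo AnonHugePages)     # 0: transparent hugepages not in play
    surp=$(get_meminfo HugePages_Surp)    # 0
    resv=$(get_meminfo HugePages_Rsvd)    # 0
    total=$(get_meminfo HugePages_Total)  # 1024
    echo "nr_hugepages=$nr resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    # Pool is consistent when requested + surplus + reserved pages account
    # for the whole allocation: 1024 == 1024 + 0 + 0 in this run.
    (( total == nr + surp + resv )) || return 1
    # Per-node counts (node0 and node1 here) must sum back to the global total.
    for node in /sys/devices/system/node/node[0-9]*; do
        (( sum += $(get_meminfo HugePages_Total "${node##*node}") ))
    done
    (( sum == total ))
}
```

This is why the log immediately re-runs get_meminfo once globally for HugePages_Total and then once per node for the per-node counters.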
00:02:57.570 20:20:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:57.570 20:20:15 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:57.570 20:20:15 -- setup/common.sh@18 -- # local node= 00:02:57.570 20:20:15 -- setup/common.sh@19 -- # local var val 00:02:57.570 20:20:15 -- setup/common.sh@20 -- # local mem_f mem 00:02:57.570 20:20:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.570 20:20:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:57.570 20:20:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:57.570 20:20:15 -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.570 20:20:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.570 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.570 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.570 20:20:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558468 kB' 'MemFree: 240100592 kB' 'MemAvailable: 244256080 kB' 'Buffers: 2696 kB' 'Cached: 11814616 kB' 'SwapCached: 0 kB' 'Active: 7741484 kB' 'Inactive: 4736864 kB' 'Active(anon): 7169964 kB' 'Inactive(anon): 0 kB' 'Active(file): 571520 kB' 'Inactive(file): 4736864 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 670296 kB' 'Mapped: 208836 kB' 'Shmem: 6508928 kB' 'KReclaimable: 652308 kB' 'Slab: 1327548 kB' 'SReclaimable: 652308 kB' 'SUnreclaim: 675240 kB' 'KernelStack: 24944 kB' 'PageTables: 10468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619260 kB' 'Committed_AS: 8857736 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329876 kB' 'VmallocChunk: 0 kB' 'Percpu: 121344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3084352 kB' 'DirectMap2M: 21858304 kB' 'DirectMap1G: 245366784 kB'
[... repeated xtrace elided: one '[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]' test plus 'continue' per /proc/meminfo key until HugePages_Total is reached ...]
00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.571 20:20:15 -- setup/common.sh@33 -- # echo 1024 00:02:57.571 20:20:15 -- setup/common.sh@33 -- # return 0 00:02:57.571 20:20:15 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:57.571 20:20:15 -- setup/hugepages.sh@112 -- # get_nodes 00:02:57.571 20:20:15 -- setup/hugepages.sh@27 -- # local node 00:02:57.571 20:20:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:57.571 20:20:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:57.571 20:20:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:57.571 20:20:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:57.571 20:20:15 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:57.571 20:20:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:57.571 20:20:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:57.571 20:20:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:57.571 20:20:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:57.571 20:20:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:57.571 20:20:15 -- setup/common.sh@18 -- # local node=0 00:02:57.571 20:20:15 -- setup/common.sh@19 -- # local var val 00:02:57.571 20:20:15 -- setup/common.sh@20 -- # local mem_f mem 00:02:57.571 20:20:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.571 20:20:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:57.571 20:20:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:57.571 20:20:15 -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.571 20:20:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131816228 kB' 'MemFree: 125239808
kB' 'MemUsed: 6576420 kB' 'SwapCached: 0 kB' 'Active: 1781168 kB' 'Inactive: 344076 kB' 'Active(anon): 1577276 kB' 'Inactive(anon): 0 kB' 'Active(file): 203892 kB' 'Inactive(file): 344076 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1848544 kB' 'Mapped: 97044 kB' 'AnonPages: 285868 kB' 'Shmem: 1300576 kB' 'KernelStack: 14184 kB' 'PageTables: 6092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 334944 kB' 'Slab: 696760 kB' 'SReclaimable: 334944 kB' 'SUnreclaim: 361816 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- 
# read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 
-- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.571 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.571 20:20:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.572 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.572 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.572 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.572 20:20:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.572 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.572 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.572 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.572 20:20:15 -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.572 20:20:15 -- setup/common.sh@32 -- # continue 00:02:57.572 20:20:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.572 20:20:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.572 20:20:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.572 20:20:15 -- setup/common.sh@33 -- # echo 0 00:02:57.572 20:20:15 -- setup/common.sh@33 -- # return 0 00:02:57.572 20:20:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:57.572 20:20:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:57.572 20:20:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:57.572 20:20:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:57.572 20:20:15 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:57.572 node0=1024 expecting 1024 00:02:57.572 20:20:15 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:57.572 00:02:57.572 real 0m5.475s 00:02:57.572 user 0m1.069s 00:02:57.572 sys 0m2.037s 00:02:57.572 20:20:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:57.572 20:20:15 -- common/autotest_common.sh@10 -- # set +x 00:02:57.572 ************************************ 00:02:57.572 END TEST default_setup 00:02:57.572 ************************************ 00:02:57.572 20:20:15 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:57.572 20:20:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:57.572 20:20:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:57.572 20:20:15 -- common/autotest_common.sh@10 -- # set +x 00:02:57.572 ************************************ 00:02:57.572 START TEST per_node_1G_alloc 00:02:57.572 ************************************ 00:02:57.572 20:20:15 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:02:57.572 20:20:15 -- setup/hugepages.sh@143 -- # local IFS=, 00:02:57.572 20:20:15 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:57.572 20:20:15 -- setup/hugepages.sh@49 -- # local size=1048576 00:02:57.572 20:20:15 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:57.572 20:20:15 -- setup/hugepages.sh@51 -- # shift 00:02:57.572 20:20:15 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:57.572 20:20:15 -- setup/hugepages.sh@52 -- # local node_ids 00:02:57.572 20:20:15 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:57.572 20:20:15 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:57.572 20:20:15 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:57.572 20:20:15 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:57.572 20:20:15 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:57.572 20:20:15 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:57.572 20:20:15 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:57.572 20:20:15 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:57.572 20:20:15 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:57.572 20:20:15 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:57.572 20:20:15 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:57.572 20:20:15 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:57.572 20:20:15 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:57.572 20:20:15 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:57.572 20:20:15 -- setup/hugepages.sh@73 -- # return 0 00:02:57.572 20:20:15 -- setup/hugepages.sh@146 -- # 
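The per-field scan condensed above is the heart of setup/common.sh's get_meminfo: it reads the chosen meminfo file, splits each "Key: value kB" line on ': ', and echoes the value once the requested key matches. A minimal sketch of that pattern, assuming the simplified structure visible in this xtrace rather than the exact SPDK source:

    # Sketch of the get_meminfo scan pattern; simplified from what this
    # xtrace shows, not the exact SPDK source.
    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo
        # A node argument switches to that node's own meminfo file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <N> "; strip it.
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")
        # One 'continue' per non-matching key is what produced the long
        # xtrace runs condensed above.
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

Against the node0 snapshot above, get_meminfo HugePages_Surp 0 prints 0, which is the echo 0 / return 0 pair in the trace.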
00:02:57.572 20:20:15 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:02:57.572 20:20:15 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:02:57.572 20:20:15 -- setup/hugepages.sh@146 -- # setup output
00:02:57.572 20:20:15 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:57.572 20:20:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh
00:03:00.119 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:00.119 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:00.119 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:00.119 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:00.119 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:00.119 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:00.119 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:00.119 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:00.119 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:00.119 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:00.119 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:00.119 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:00.119 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:00.119 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:00.119 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:00.119 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:00.119 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:00.119 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver
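NRHUGE=512 with HUGENODE=0,1 asks setup.sh for 512 hugepages on each of the two nodes, matching the get_test_nr_hugepages trace above: the requested size of 1048576 kB (1 GiB) divided by the 2048 kB default page size gives 512, and each listed node receives the full count. A sketch of that arithmetic; only get_test_nr_hugepages, nr_hugepages and nodes_test come from the log, the rest is illustrative:

    # Sketch of the NRHUGE/HUGENODE split; default_hugepages matches the
    # 'Hugepagesize: 2048 kB' field in the snapshots below, the structure
    # is illustrative rather than the SPDK source.
    default_hugepages=2048            # kB per hugepage
    declare -A nodes_test=()

    get_test_nr_hugepages() {
        local size=$1; shift          # 1048576 kB = 1 GiB requested
        local -a node_ids=("$@")      # (0 1) from HUGENODE=0,1
        nr_hugepages=$((size / default_hugepages))   # 1048576 / 2048 = 512
        # Each listed node gets the full per-node count, so two nodes
        # imply 1024 pages system-wide -- the total verified below.
        local node
        for node in "${node_ids[@]}"; do
            nodes_test[$node]=$nr_hugepages
        done
    }

    get_test_nr_hugepages 1048576 0 1
    echo "NRHUGE=$nr_hugepages"       # NRHUGE=512, as exported to setup.sh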
00:03:00.385 20:20:18 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:00.385 20:20:18 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:00.385 20:20:18 -- setup/hugepages.sh@89 -- # local node
00:03:00.385 20:20:18 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:00.385 20:20:18 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:00.385 20:20:18 -- setup/hugepages.sh@92 -- # local surp
00:03:00.385 20:20:18 -- setup/hugepages.sh@93 -- # local resv
00:03:00.385 20:20:18 -- setup/hugepages.sh@94 -- # local anon
00:03:00.385 20:20:18 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:00.385 20:20:18 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:00.385 20:20:18 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:00.385 20:20:18 -- setup/common.sh@18 -- # local node=
00:03:00.385 20:20:18 -- setup/common.sh@19 -- # local var val
00:03:00.385 20:20:18 -- setup/common.sh@20 -- # local mem_f mem
00:03:00.385 20:20:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:00.385 20:20:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:00.385 20:20:18 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:00.385 20:20:18 -- setup/common.sh@28 -- # mapfile -t mem
00:03:00.385 20:20:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:00.385 20:20:18 -- setup/common.sh@31 -- # IFS=': '
00:03:00.385 20:20:18 -- setup/common.sh@31 -- # read -r var val _
00:03:00.385 20:20:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558468 kB' 'MemFree: 240083164 kB' 'MemAvailable: 244238636 kB' 'Buffers: 2696 kB' 'Cached: 11814704 kB' 'SwapCached: 0 kB' 'Active: 7741564 kB' 'Inactive: 4736864 kB' 'Active(anon): 7170044 kB' 'Inactive(anon): 0 kB' 'Active(file): 571520 kB' 'Inactive(file): 4736864 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 670556 kB' 'Mapped: 208904 kB' 'Shmem: 6509016 kB' 'KReclaimable: 652276 kB' 'Slab: 1327684 kB' 'SReclaimable: 652276 kB' 'SUnreclaim: 675408 kB' 'KernelStack: 24848 kB' 'PageTables: 10212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619260 kB' 'Committed_AS: 8857832 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329812 kB' 'VmallocChunk: 0 kB' 'Percpu: 121344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3084352 kB' 'DirectMap2M: 21858304 kB' 'DirectMap1G: 245366784 kB'
[the per-field scan repeats against AnonHugePages for each field above, MemTotal through HardwareCorrupted, until the key matches]
00:03:00.386 20:20:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:00.386 20:20:18 -- setup/common.sh@33 -- # echo 0
00:03:00.386 20:20:18 -- setup/common.sh@33 -- # return 0
00:03:00.386 20:20:18 -- setup/hugepages.sh@97 -- # anon=0
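With anon resolved to 0, verify_nr_hugepages gathers the surplus and reserved counters the same way and checks them against the configured pool, as in the (( 1024 == nr_hugepages + surp + resv )) test recorded earlier in this log. A sketch of that accounting, reusing the get_meminfo sketch above; verify_totals is an illustrative name and condenses the flow, it is not the SPDK function:

    # Sketch of the totals check performed by the surrounding xtrace.
    verify_totals() {
        local anon surp resv total
        anon=$(get_meminfo AnonHugePages)     # 0 kB here: THP is not in play
        surp=$(get_meminfo HugePages_Surp)    # 0: no surplus pages
        resv=$(get_meminfo HugePages_Rsvd)    # 0: none reserved yet
        total=$(get_meminfo HugePages_Total)  # 1024
        # The pool must account for exactly the configured pages.
        (( total == nr_hugepages + surp + resv ))
    }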
00:03:00.386 20:20:18 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:00.386 20:20:18 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:00.386 20:20:18 -- setup/common.sh@18 -- # local node=
00:03:00.386 20:20:18 -- setup/common.sh@19 -- # local var val
00:03:00.386 20:20:18 -- setup/common.sh@20 -- # local mem_f mem
00:03:00.386 20:20:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:00.386 20:20:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:00.386 20:20:18 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:00.386 20:20:18 -- setup/common.sh@28 -- # mapfile -t mem
00:03:00.386 20:20:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:00.386 20:20:18 -- setup/common.sh@31 -- # IFS=': '
00:03:00.386 20:20:18 -- setup/common.sh@31 -- # read -r var val _
00:03:00.386 20:20:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558468 kB' 'MemFree: 240084448 kB' 'MemAvailable: 244239920 kB' 'Buffers: 2696 kB' 'Cached: 11814704 kB' 'SwapCached: 0 kB' 'Active: 7742296 kB' 'Inactive: 4736864 kB' 'Active(anon): 7170776 kB' 'Inactive(anon): 0 kB' 'Active(file): 571520 kB' 'Inactive(file): 4736864 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 671264 kB' 'Mapped: 208904 kB' 'Shmem: 6509016 kB' 'KReclaimable: 652276 kB' 'Slab: 1327652 kB' 'SReclaimable: 652276 kB' 'SUnreclaim: 675376 kB' 'KernelStack: 24864 kB' 'PageTables: 10232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619260 kB' 'Committed_AS: 8858084 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329764 kB' 'VmallocChunk: 0 kB' 'Percpu: 121344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3084352 kB' 'DirectMap2M: 21858304 kB' 'DirectMap1G: 245366784 kB'
[the per-field scan repeats against HugePages_Surp for each field above, MemTotal through HugePages_Free, until the key matches]
00:03:00.388 20:20:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:00.388 20:20:18 -- setup/common.sh@33 -- # echo 0
00:03:00.388 20:20:18 -- setup/common.sh@33 -- # return 0
00:03:00.388 20:20:18 -- setup/hugepages.sh@99 -- # surp=0
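surp=0 then feeds the per-node bookkeeping: each node's expected count is adjusted by the reserved and per-node surplus pages before the "nodeN=X expecting X" comparison seen at hugepages.sh@128 earlier in this log. A sketch of that pass, assuming illustrative structure around the names the log shows (nodes_test, nodes_sys, get_meminfo); check_nodes is a hypothetical name:

    # Sketch of the per-node pass; mirrors the hugepages.sh@115-128 steps
    # recorded earlier, not the exact SPDK source.
    check_nodes() {
        local node surp
        for node in "${!nodes_test[@]}"; do
            (( nodes_test[node] += resv ))               # resv from the verify step
            surp=$(get_meminfo HugePages_Surp "$node")   # per-node query, 0 here
            (( nodes_test[node] += surp ))
            # nodes_sys[] holds what get_nodes read from sysfs for each node.
            echo "node$node=${nodes_sys[$node]} expecting ${nodes_test[$node]}"
        done
    }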
00:03:00.388 20:20:18 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:00.388 20:20:18 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:00.388 20:20:18 -- setup/common.sh@18 -- # local node=
00:03:00.388 20:20:18 -- setup/common.sh@19 -- # local var val
00:03:00.388 20:20:18 -- setup/common.sh@20 -- # local mem_f mem
00:03:00.388 20:20:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:00.388 20:20:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:00.388 20:20:18 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:00.388 20:20:18 -- setup/common.sh@28 -- # mapfile -t mem
00:03:00.388 20:20:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:00.388 20:20:18 -- setup/common.sh@31 -- # IFS=': '
00:03:00.388 20:20:18 -- setup/common.sh@31 -- # read -r var val _
00:03:00.388 20:20:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558468 kB' 'MemFree: 240084032 kB' 'MemAvailable: 244239504 kB' 'Buffers: 2696 kB' 'Cached: 11814708 kB' 'SwapCached: 0 kB' 'Active: 7742000 kB' 'Inactive: 4736864 kB' 'Active(anon): 7170480 kB' 'Inactive(anon): 0 kB' 'Active(file): 571520 kB' 'Inactive(file): 4736864 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 671060 kB' 'Mapped: 208852 kB' 'Shmem: 6509020 kB' 'KReclaimable: 652276 kB' 'Slab: 1327652 kB' 'SReclaimable: 652276 kB' 'SUnreclaim: 675376 kB' 'KernelStack: 24816 kB' 'PageTables: 10116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619260 kB' 'Committed_AS: 8857860 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329780 kB' 'VmallocChunk: 0 kB' 'Percpu: 121344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3084352 kB' 'DirectMap2M: 21858304 kB' 'DirectMap1G: 245366784 kB'
[the per-field scan then repeats against HugePages_Rsvd; the captured log breaks off mid-scan at the AnonHugePages check, 00:03:00.390]
20:20:18 -- setup/common.sh@32 -- # continue 00:03:00.390 20:20:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.390 20:20:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.390 20:20:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.390 20:20:18 -- setup/common.sh@32 -- # continue 00:03:00.390 20:20:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.390 20:20:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.390 20:20:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.390 20:20:18 -- setup/common.sh@32 -- # continue 00:03:00.390 20:20:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.390 20:20:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.390 20:20:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.390 20:20:18 -- setup/common.sh@32 -- # continue 00:03:00.390 20:20:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.390 20:20:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.390 20:20:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.390 20:20:18 -- setup/common.sh@32 -- # continue 00:03:00.390 20:20:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.390 20:20:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.390 20:20:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.390 20:20:18 -- setup/common.sh@32 -- # continue 00:03:00.390 20:20:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.390 20:20:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.390 20:20:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.390 20:20:18 -- setup/common.sh@32 -- # continue 00:03:00.390 20:20:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.390 20:20:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.390 20:20:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.390 20:20:18 -- setup/common.sh@32 -- # continue 00:03:00.390 20:20:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.390 20:20:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.390 20:20:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.390 20:20:18 -- setup/common.sh@32 -- # continue 00:03:00.390 20:20:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.390 20:20:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.390 20:20:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.390 20:20:18 -- setup/common.sh@32 -- # continue 00:03:00.390 20:20:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.390 20:20:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.390 20:20:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.390 20:20:18 -- setup/common.sh@33 -- # echo 0 00:03:00.390 20:20:18 -- setup/common.sh@33 -- # return 0 00:03:00.390 20:20:18 -- setup/hugepages.sh@100 -- # resv=0 00:03:00.390 20:20:18 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:00.390 nr_hugepages=1024 00:03:00.390 20:20:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:00.390 resv_hugepages=0 00:03:00.390 20:20:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:00.390 surplus_hugepages=0 00:03:00.390 20:20:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:00.390 anon_hugepages=0 00:03:00.390 20:20:18 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:00.390 20:20:18 -- 
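The condensed scan above is bash xtrace from setup/common.sh's get_meminfo helper, which walks /proc/meminfo (or a node's sysfs meminfo file) one key per iteration until the requested key matches, then echoes its value. A minimal sketch of the logic the @17-@33 trace lines imply, reconstructed from the trace rather than copied from the SPDK source:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # Per-node queries read the sysfs copy instead of the global file.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix of per-node files
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the key-by-key scan seen in the trace
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as get_meminfo HugePages_Rsvd here, it prints 0 and returns, which is what the echo 0 / return 0 lines above record.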
00:03:00.390 20:20:18 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:00.390 20:20:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:00.390 20:20:18 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:00.390 20:20:18 -- setup/common.sh@18 -- # local node=
00:03:00.390 20:20:18 -- setup/common.sh@19 -- # local var val
00:03:00.390 20:20:18 -- setup/common.sh@20 -- # local mem_f mem
00:03:00.390 20:20:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:00.390 20:20:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:00.390 20:20:18 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:00.390 20:20:18 -- setup/common.sh@28 -- # mapfile -t mem
00:03:00.390 20:20:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:00.390 20:20:18 -- setup/common.sh@31 -- # IFS=': '
00:03:00.390 20:20:18 -- setup/common.sh@31 -- # read -r var val _
00:03:00.390 20:20:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558468 kB' 'MemFree: 240083880 kB' 'MemAvailable: 244239352 kB' 'Buffers: 2696 kB' 'Cached: 11814716 kB' 'SwapCached: 0 kB' 'Active: 7741736 kB' 'Inactive: 4736864 kB' 'Active(anon): 7170216 kB' 'Inactive(anon): 0 kB' 'Active(file): 571520 kB' 'Inactive(file): 4736864 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 670884 kB' 'Mapped: 208852 kB' 'Shmem: 6509028 kB' 'KReclaimable: 652276 kB' 'Slab: 1327668 kB' 'SReclaimable: 652276 kB' 'SUnreclaim: 675392 kB' 'KernelStack: 24816 kB' 'PageTables: 10112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619260 kB' 'Committed_AS: 8857876 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329748 kB' 'VmallocChunk: 0 kB' 'Percpu: 121344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3084352 kB' 'DirectMap2M: 21858304 kB' 'DirectMap1G: 245366784 kB'
00:03:00.390 20:20:18 -- setup/common.sh@32 -- # [... identical compare-and-continue iterations for keys MemTotal through Unaccepted; none matches HugePages_Total ...]
00:03:00.392 20:20:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:00.392 20:20:18 -- setup/common.sh@33 -- # echo 1024
00:03:00.392 20:20:18 -- setup/common.sh@33 -- # return 0
00:03:00.392 20:20:18 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:00.392 20:20:18 -- setup/hugepages.sh@112 -- # get_nodes
00:03:00.392 20:20:18 -- setup/hugepages.sh@27 -- # local node
00:03:00.392 20:20:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:00.392 20:20:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:00.392 20:20:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:00.392 20:20:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:00.392 20:20:18 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:00.392 20:20:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
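get_nodes above fills nodes_sys with one entry per NUMA node before the per-node checks that follow. A sketch of what the @27-@33 trace lines imply; reading each node's nr_hugepages count is an assumption about where the traced value 512 comes from, not verbatim SPDK source:

    shopt -s extglob nullglob
    declare -a nodes_sys

    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # Assumed source of the 512 seen in the trace: the node's 2 MB hugepage pool size.
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}   # 2 on this machine
        (( no_nodes > 0 ))          # fail if no NUMA topology is visible
    }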
00:03:00.392 20:20:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:00.392 20:20:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:00.392 20:20:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:00.392 20:20:18 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:00.392 20:20:18 -- setup/common.sh@18 -- # local node=0
00:03:00.392 20:20:18 -- setup/common.sh@19 -- # local var val
00:03:00.392 20:20:18 -- setup/common.sh@20 -- # local mem_f mem
00:03:00.392 20:20:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:00.392 20:20:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:00.392 20:20:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:00.392 20:20:18 -- setup/common.sh@28 -- # mapfile -t mem
00:03:00.392 20:20:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:00.392 20:20:18 -- setup/common.sh@31 -- # IFS=': '
00:03:00.392 20:20:18 -- setup/common.sh@31 -- # read -r var val _
00:03:00.392 20:20:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131816228 kB' 'MemFree: 126277776 kB' 'MemUsed: 5538452 kB' 'SwapCached: 0 kB' 'Active: 1781260 kB' 'Inactive: 344076 kB' 'Active(anon): 1577368 kB' 'Inactive(anon): 0 kB' 'Active(file): 203892 kB' 'Inactive(file): 344076 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1848612 kB' 'Mapped: 97052 kB' 'AnonPages: 286132 kB' 'Shmem: 1300644 kB' 'KernelStack: 14088 kB' 'PageTables: 5828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 334912 kB' 'Slab: 696440 kB' 'SReclaimable: 334912 kB' 'SUnreclaim: 361528 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:00.392 20:20:18 -- setup/common.sh@32 -- # [... identical compare-and-continue iterations for node0 keys MemTotal through HugePages_Free; none matches HugePages_Surp ...]
00:03:00.393 20:20:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:00.393 20:20:18 -- setup/common.sh@33 -- # echo 0
00:03:00.393 20:20:18 -- setup/common.sh@33 -- # return 0
00:03:00.393 20:20:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:00.393 20:20:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:00.393 20:20:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:00.393 20:20:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:00.393 20:20:18 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:00.393 20:20:18 -- setup/common.sh@18 -- # local node=1
00:03:00.393 20:20:18 -- setup/common.sh@19 -- # local var val
00:03:00.393 20:20:18 -- setup/common.sh@20 -- # local mem_f mem
00:03:00.393 20:20:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:00.393 20:20:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:00.393 20:20:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:00.393 20:20:18 -- setup/common.sh@28 -- # mapfile -t mem
00:03:00.393 20:20:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:00.393 20:20:18 -- setup/common.sh@31 -- # IFS=': '
00:03:00.393 20:20:18 -- setup/common.sh@31 -- # read -r var val _
00:03:00.393 20:20:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126742240 kB' 'MemFree: 113805788 kB' 'MemUsed: 12936452 kB' 'SwapCached: 0 kB' 'Active: 5960272 kB' 'Inactive: 4392788 kB' 'Active(anon): 5592644 kB' 'Inactive(anon): 0 kB' 'Active(file): 367628 kB' 'Inactive(file): 4392788 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9968824 kB' 'Mapped: 111800 kB' 'AnonPages: 384424 kB' 'Shmem: 5208408 kB' 'KernelStack: 10616 kB' 'PageTables: 4004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 317364 kB' 'Slab: 631304 kB' 'SReclaimable: 317364 kB' 'SUnreclaim: 313940 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
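With both per-node snapshots in hand, the @115-@117 loop folds reserved and surplus pages into the expected count for each node, and the @126-@130 loop just below prints the "node0=512 expecting 512" summary and collects the distinct counts. A sketch reconstructed from those trace lines (indexed arrays suffice since both keys and values are numeric):

    declare -a sorted_t sorted_s
    resv=0
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        surp=$(get_meminfo HugePages_Surp "$node")   # 0 for both nodes in this run
        (( nodes_test[node] += surp ))
    done
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1   # distinct expected counts
        sorted_s[nodes_sys[node]]=1    # distinct actual counts
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done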
00:03:00.393 20:20:18 -- setup/common.sh@32 -- # [... identical compare-and-continue iterations for node1 keys MemTotal through HugePages_Free; none matches HugePages_Surp ...]
00:03:00.395 20:20:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:00.395 20:20:18 -- setup/common.sh@33 -- # echo 0
00:03:00.395 20:20:18 -- setup/common.sh@33 -- # return 0
00:03:00.395 20:20:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:00.395 20:20:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:00.395 20:20:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:00.395 20:20:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:00.395 20:20:18 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:00.395 node0=512 expecting 512
00:03:00.395 20:20:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:00.395 20:20:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:00.395 20:20:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:00.395 20:20:18 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:00.395 node1=512 expecting 512
00:03:00.395 20:20:18 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:00.395
00:03:00.395 real	0m2.793s
00:03:00.395 user	0m0.971s
00:03:00.395 sys	0m1.658s
00:03:00.395 20:20:18 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:00.395 20:20:18 -- common/autotest_common.sh@10 -- # set +x
00:03:00.395 ************************************
00:03:00.395 END TEST per_node_1G_alloc
00:03:00.395 ************************************
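Each test in this suite goes through the run_test wrapper, which prints the START/END banners and the timing block seen above. A sketch of its shape; this is an assumption reconstructed from the common/autotest_common.sh trace lines (the real helper does more bookkeeping, e.g. the argument-count guard traced as '[' 2 -le 1 ']'):

    run_test() {
        local test_name=$1
        shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"   # emits the real/user/sys lines seen in the log
        local rc=$?
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
        return $rc
    }

Invoked below as run_test even_2G_alloc even_2G_alloc.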
00:03:00.395 20:20:18 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:00.395 20:20:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:00.395 20:20:18 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:00.395 20:20:18 -- common/autotest_common.sh@10 -- # set +x
00:03:00.395 ************************************
00:03:00.395 START TEST even_2G_alloc
00:03:00.395 ************************************
00:03:00.395 20:20:18 -- common/autotest_common.sh@1104 -- # even_2G_alloc
00:03:00.395 20:20:18 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:00.395 20:20:18 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:00.395 20:20:18 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:00.395 20:20:18 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:00.395 20:20:18 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:00.395 20:20:18 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:00.395 20:20:18 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:00.395 20:20:18 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:00.395 20:20:18 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:00.395 20:20:18 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:00.395 20:20:18 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:00.395 20:20:18 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:00.395 20:20:18 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:00.395 20:20:18 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:00.395 20:20:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:00.395 20:20:18 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:00.395 20:20:18 -- setup/hugepages.sh@83 -- # : 512
00:03:00.395 20:20:18 -- setup/hugepages.sh@84 -- # : 1
00:03:00.395 20:20:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:00.395 20:20:18 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:00.395 20:20:18 -- setup/hugepages.sh@83 -- # : 0
00:03:00.395 20:20:18 -- setup/hugepages.sh@84 -- # : 0
00:03:00.395 20:20:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:00.395 20:20:18 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:00.395 20:20:18 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:00.395 20:20:18 -- setup/hugepages.sh@153 -- # setup output
00:03:00.395 20:20:18 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:00.395 20:20:18 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh
00:03:03.752 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:03.752 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:03.752 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:03.752 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:03.752 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:03.752 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:03.752 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:03.752 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:03.752 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:03.752 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:03.752 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:03.752 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:03.752 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:03.752 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:03.752 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:03.752 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:03.752 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:03.752 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver
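The get_test_nr_hugepages trace earlier in this test converts the requested 2097152 kB into 1024 default-size (2048 kB) hugepages and splits them evenly across the two nodes; the ": 512" / ": 1" / ": 0" lines above are the no-op arithmetic evaluations of that loop. A sketch reconstructed from the @49-@84 trace lines (the size/default_hugepages division is an inference, since the trace only shows the resulting 1024):

    declare -a nodes_test
    size=2097152 default_hugepages=2048             # kB
    nr_hugepages=$(( size / default_hugepages ))    # 1024 pages
    _nr_hugepages=$nr_hugepages
    _no_nodes=2
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))   # 512 per node
        : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))          # traced as ": 512" then ": 0"
        : $(( --_no_nodes ))                                         # traced as ": 1" then ": 0"
    done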
00:03:03.752 20:20:21 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:03.752 20:20:21 -- setup/hugepages.sh@89 -- # local node
00:03:03.752 20:20:21 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:03.752 20:20:21 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:03.752 20:20:21 -- setup/hugepages.sh@92 -- # local surp
00:03:03.752 20:20:21 -- setup/hugepages.sh@93 -- # local resv
00:03:03.752 20:20:21 -- setup/hugepages.sh@94 -- # local anon
00:03:03.752 20:20:21 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:03.752 20:20:21 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:03.752 20:20:21 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:03.752 20:20:21 -- setup/common.sh@18 -- # local node=
00:03:03.752 20:20:21 -- setup/common.sh@19 -- # local var val
00:03:03.752 20:20:21 -- setup/common.sh@20 -- # local mem_f mem
00:03:03.752 20:20:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:03.752 20:20:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:03.752 20:20:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:03.752 20:20:21 -- setup/common.sh@28 -- # mapfile -t mem
00:03:03.752 20:20:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:03.752 20:20:21 -- setup/common.sh@31 -- # IFS=': '
00:03:03.752 20:20:21 -- setup/common.sh@31 -- # read -r var val _
00:03:03.752 20:20:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558468 kB' 'MemFree: 240125732 kB' 'MemAvailable: 244281204 kB' 'Buffers: 2696 kB' 'Cached: 11814836 kB' 'SwapCached: 0 kB' 'Active: 7732224 kB' 'Inactive: 4736864 kB' 'Active(anon): 7160704 kB' 'Inactive(anon): 0 kB' 'Active(file): 571520 kB' 'Inactive(file): 4736864 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660892 kB' 'Mapped: 207868 kB' 'Shmem: 6509148 kB' 'KReclaimable: 652276 kB' 'Slab: 1326916 kB' 'SReclaimable: 652276 kB' 'SUnreclaim: 674640 kB' 'KernelStack: 24576 kB' 'PageTables: 9116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619260 kB' 'Committed_AS: 8815308 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329588 kB' 'VmallocChunk: 0 kB' 'Percpu: 121344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3084352 kB' 'DirectMap2M: 21858304 kB' 'DirectMap1G: 245366784 kB'
00:03:03.752 20:20:21 -- setup/common.sh@32 -- # [... identical compare-and-continue iterations for keys MemTotal through SUnreclaim; none matches AnonHugePages ...]
00:03:03.753 20:20:21 -- setup/common.sh@32 -- # [[ KernelStack
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.753 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.753 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.753 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.753 20:20:21 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.753 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.753 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.753 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.753 20:20:21 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.753 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.753 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.753 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.753 20:20:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.753 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.753 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.753 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.753 20:20:21 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.753 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.753 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.753 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.753 20:20:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.753 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.753 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.753 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.753 20:20:21 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.753 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.753 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.753 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.753 20:20:21 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.753 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.753 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.753 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.753 20:20:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.753 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.753 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.753 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.753 20:20:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.753 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.753 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.753 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.753 20:20:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.753 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.753 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.753 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.753 20:20:21 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.753 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.753 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.753 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.753 20:20:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.753 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.753 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.753 20:20:21 -- setup/common.sh@31 
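The records above are setup/common.sh's get_meminfo helper scanning /proc/meminfo under xtrace: each 'Field: value' record is split with IFS=': ', compared against the requested key, and skipped with continue until the key matches, at which point the value is echoed back to hugepages.sh. A minimal standalone sketch of that pattern (illustrative names, not the SPDK source verbatim):

get_meminfo_sketch() {
    # Scan "Field: value" records and print the value of the requested
    # field, mirroring the read/compare/continue loop in the trace.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # not the field we want: skip
        echo "$val"                        # matched: print the value
        return 0
    done < /proc/meminfo
    return 1                               # field not present
}

get_meminfo_sketch AnonHugePages   # on this host prints: 0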
00:03:03.753 20:20:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:03.753 20:20:21 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:03.753 20:20:21 -- setup/common.sh@18 -- # local node=
00:03:03.753 20:20:21 -- setup/common.sh@19 -- # local var val
00:03:03.753 20:20:21 -- setup/common.sh@20 -- # local mem_f mem
00:03:03.753 20:20:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:03.753 20:20:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:03.753 20:20:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:03.753 20:20:21 -- setup/common.sh@28 -- # mapfile -t mem
00:03:03.753 20:20:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:03.753 20:20:21 -- setup/common.sh@31 -- # IFS=': '
00:03:03.753 20:20:21 -- setup/common.sh@31 -- # read -r var val _
00:03:03.753 20:20:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558468 kB' 'MemFree: 240128368 kB' 'MemAvailable: 244283840 kB' 'Buffers: 2696 kB' 'Cached: 11814840 kB' 'SwapCached: 0 kB' 'Active: 7732544 kB' 'Inactive: 4736864 kB' 'Active(anon): 7161024 kB' 'Inactive(anon): 0 kB' 'Active(file): 571520 kB' 'Inactive(file): 4736864 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661380 kB' 'Mapped: 207868 kB' 'Shmem: 6509152 kB' 'KReclaimable: 652276 kB' 'Slab: 1326868 kB' 'SReclaimable: 652276 kB' 'SUnreclaim: 674592 kB' 'KernelStack: 24544 kB' 'PageTables: 9008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619260 kB' 'Committed_AS: 8812284 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329556 kB' 'VmallocChunk: 0 kB' 'Percpu: 121344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3084352 kB' 'DirectMap2M: 21858304 kB' 'DirectMap1G: 245366784 kB'
00:03:03.753 20:20:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:03.753 20:20:21 -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue xtrace records elided: the scan continues field by field (MemFree through HugePages_Rsvd) ...]
00:03:03.755 20:20:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:03.755 20:20:21 -- setup/common.sh@33 -- # echo 0
00:03:03.755 20:20:21 -- setup/common.sh@33 -- # return 0
00:03:03.755 20:20:21 -- setup/hugepages.sh@99 -- # surp=0
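For a single field out of /proc/meminfo the same lookup could be a one-liner; the loop form above is what lets the script reuse one parser for both the global and the per-node meminfo files. A hypothetical equivalent for the field just fetched:

awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo   # prints 0 on this host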
-- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.755 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.755 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.755 20:20:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558468 kB' 'MemFree: 240127864 kB' 'MemAvailable: 244283336 kB' 'Buffers: 2696 kB' 'Cached: 11814848 kB' 'SwapCached: 0 kB' 'Active: 7732760 kB' 'Inactive: 4736864 kB' 'Active(anon): 7161240 kB' 'Inactive(anon): 0 kB' 'Active(file): 571520 kB' 'Inactive(file): 4736864 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661544 kB' 'Mapped: 207840 kB' 'Shmem: 6509160 kB' 'KReclaimable: 652276 kB' 'Slab: 1326868 kB' 'SReclaimable: 652276 kB' 'SUnreclaim: 674592 kB' 'KernelStack: 24624 kB' 'PageTables: 9140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619260 kB' 'Committed_AS: 8815708 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329588 kB' 'VmallocChunk: 0 kB' 'Percpu: 121344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3084352 kB' 'DirectMap2M: 21858304 kB' 'DirectMap1G: 245366784 kB' 00:03:03.755 20:20:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.755 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.755 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.755 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.755 20:20:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.755 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.755 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.755 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.755 20:20:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.755 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.755 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.755 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.755 20:20:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.755 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.755 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.755 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.755 20:20:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.755 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.755 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.755 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.755 20:20:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.755 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.755 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.755 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.755 20:20:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.755 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.755 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.755 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.755 20:20:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:03.755 20:20:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:03.755 20:20:21 -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue xtrace records elided: the scan continues field by field (MemFree through HugePages_Free) ...]
00:03:03.756 20:20:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:03.756 20:20:21 -- setup/common.sh@33 -- # echo 0
00:03:03.756 20:20:21 -- setup/common.sh@33 -- # return 0
00:03:03.756 20:20:21 -- setup/hugepages.sh@100 -- # resv=0
00:03:03.756 20:20:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:03.756 nr_hugepages=1024
00:03:03.756 20:20:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:03.756 resv_hugepages=0
00:03:03.756 20:20:21 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:03.756 surplus_hugepages=0
00:03:03.756 20:20:21 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:03.756 anon_hugepages=0
00:03:03.756 20:20:21 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:03.756 20:20:21 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
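With anon, surp, and resv collected, hugepages.sh@107 and @109 assert that the requested count (1024 pages) is consistent with those figures, and the get_meminfo HugePages_Total call that follows re-validates the total straight from /proc/meminfo. A condensed sketch of that verification using the values from this run (the surrounding function and exit handling are assumed):

anon=0; surp=0; resv=0        # AnonHugePages, HugePages_Surp, HugePages_Rsvd above
nr_hugepages=1024             # the count this test configured
(( 1024 == nr_hugepages + surp + resv )) || exit 1   # hugepages.sh@107
(( 1024 == nr_hugepages )) || exit 1                 # hugepages.sh@109
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
(( 1024 == total + surp + resv )) || exit 1          # hugepages.sh@110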
00:03:03.756 20:20:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:03.756 20:20:21 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:03.756 20:20:21 -- setup/common.sh@18 -- # local node=
00:03:03.756 20:20:21 -- setup/common.sh@19 -- # local var val
00:03:03.756 20:20:21 -- setup/common.sh@20 -- # local mem_f mem
00:03:03.756 20:20:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:03.756 20:20:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:03.756 20:20:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:03.756 20:20:21 -- setup/common.sh@28 -- # mapfile -t mem
00:03:03.756 20:20:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:03.756 20:20:21 -- setup/common.sh@31 -- # IFS=': '
00:03:03.756 20:20:21 -- setup/common.sh@31 -- # read -r var val _
00:03:03.756 20:20:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558468 kB' 'MemFree: 240126512 kB' 'MemAvailable: 244281984 kB' 'Buffers: 2696 kB' 'Cached: 11814864 kB' 'SwapCached: 0 kB' 'Active: 7734016 kB' 'Inactive: 4736864 kB' 'Active(anon): 7162496 kB' 'Inactive(anon): 0 kB' 'Active(file): 571520 kB' 'Inactive(file): 4736864 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663304 kB' 'Mapped: 208344 kB' 'Shmem: 6509176 kB' 'KReclaimable: 652276 kB' 'Slab: 1326856 kB' 'SReclaimable: 652276 kB' 'SUnreclaim: 674580 kB' 'KernelStack: 24608 kB' 'PageTables: 9064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619260 kB' 'Committed_AS: 8818488 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329460 kB' 'VmallocChunk: 0 kB' 'Percpu: 121344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3084352 kB' 'DirectMap2M: 21858304 kB' 'DirectMap1G: 245366784 kB'
00:03:03.757 20:20:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:03.757 20:20:21 -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue xtrace records elided: the scan continues field by field (MemFree through Unaccepted) ...]
00:03:03.758 20:20:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:03.758 20:20:21 -- setup/common.sh@33 -- # echo 1024
00:03:03.758 20:20:21 -- setup/common.sh@33 -- # return 0
00:03:03.758 20:20:21 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:03.758 20:20:21 -- setup/hugepages.sh@112 -- # get_nodes
00:03:03.758 20:20:21 -- setup/hugepages.sh@27 -- # local node
00:03:03.758 20:20:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:03.758 20:20:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:03.758 20:20:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:03.758 20:20:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:03.758 20:20:21 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:03.758 20:20:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:03.758 20:20:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:03.758 20:20:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
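get_nodes found two NUMA nodes and seeded nodes_sys with 512 pages each; the next get_meminfo call passes node 0, which switches mem_f from /proc/meminfo to the node's own meminfo file and strips the 'Node N ' prefix from each record so the same parser applies to both sources. A sketch of that path selection, inferred from the @22 through @29 records (details assumed):

node=0
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo   # per-node source
fi
shopt -s extglob                  # needed for the +([0-9]) pattern below
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")  # "Node 0 MemTotal: ..." -> "MemTotal: ..."
printf '%s\n' "${mem[@]}"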
00:03:03.758 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # [[ FilePages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.758 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.758 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.759 20:20:21 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # continue 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.759 20:20:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.759 20:20:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.759 20:20:21 -- setup/common.sh@33 -- # echo 0 00:03:03.759 20:20:21 -- setup/common.sh@33 -- # return 0 00:03:03.759 20:20:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:03.759 20:20:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:03.759 20:20:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:03.759 20:20:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:03.759 20:20:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:03.759 20:20:21 -- setup/common.sh@18 -- # local node=1 00:03:03.759 20:20:21 -- setup/common.sh@19 -- # local var val 00:03:03.759 20:20:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:03.759 20:20:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.759 20:20:21 -- setup/common.sh@23 -- # [[ -e 
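Note: the trace above is setup/common.sh's get_meminfo scanning a (per-node) meminfo file key by key until the requested field matches. A minimal standalone sketch of that pattern, reconstructed from the xtrace; the function body is illustrative, not the repo's verbatim source:

    # Sketch of the meminfo lookup the trace above exercises.
    get_meminfo() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo mem
        # Prefer the per-node view when a node id was given and it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # Per-node files prefix every line with "Node <id> "; strip it.
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp 0   # -> 0 on this box, per the node0 snapshot above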
00:03:03.758 20:20:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:03.758 20:20:21 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:03.758 20:20:21 -- setup/common.sh@18 -- # local node=1
00:03:03.758 20:20:21 -- setup/common.sh@19 -- # local var val
00:03:03.758 20:20:21 -- setup/common.sh@20 -- # local mem_f mem
00:03:03.758 20:20:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:03.759 20:20:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:03.759 20:20:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:03.759 20:20:21 -- setup/common.sh@28 -- # mapfile -t mem
00:03:03.759 20:20:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:03.759 20:20:21 -- setup/common.sh@31 -- # IFS=': '
00:03:03.759 20:20:21 -- setup/common.sh@31 -- # read -r var val _
00:03:03.759 20:20:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126742240 kB' 'MemFree: 113808564 kB' 'MemUsed: 12933676 kB' 'SwapCached: 0 kB' 'Active: 5959540 kB' 'Inactive: 4392788 kB' 'Active(anon): 5591912 kB' 'Inactive(anon): 0 kB' 'Active(file): 367628 kB' 'Inactive(file): 4392788 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9968904 kB' 'Mapped: 111800 kB' 'AnonPages: 384152 kB' 'Shmem: 5208488 kB' 'KernelStack: 10568 kB' 'PageTables: 3764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 317364 kB' 'Slab: 631196 kB' 'SReclaimable: 317364 kB' 'SUnreclaim: 313832 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace repeats "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" for every node1 meminfo key preceding HugePages_Surp ...]
00:03:03.760 20:20:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:03.760 20:20:21 -- setup/common.sh@33 -- # echo 0
00:03:03.760 20:20:21 -- setup/common.sh@33 -- # return 0
00:03:03.760 20:20:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:03.760 20:20:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:03.760 20:20:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:03.760 20:20:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:03.760 20:20:21 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:03.760 node0=512 expecting 512
00:03:03.760 20:20:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:03.760 20:20:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:03.760 20:20:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:03.760 20:20:21 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:03.760 node1=512 expecting 512
00:03:03.760 20:20:21 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:03.760
00:03:03.760 real	0m2.976s
00:03:03.760 user	0m1.007s
00:03:03.760 sys	0m1.838s
00:03:03.760 20:20:21 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:03.760 20:20:21 -- common/autotest_common.sh@10 -- # set +x
00:03:03.760 ************************************
00:03:03.760 END TEST even_2G_alloc
00:03:03.760 ************************************
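Note: the even_2G_alloc verification above folds reserved and surplus pages into the expected per-node counts, then compares the set of distinct expected values against what get_nodes read back from sysfs. A hedged sketch of that comparison, with this run's values; the sorted_t/sorted_s usage is reconstructed from the hugepages.sh@126-130 trace, not copied from the source:

    # Per-node comparison as traced at hugepages.sh@126-130.
    nodes_test=(512 512)   # expected pages per node (resv and surp already folded in)
    nodes_sys=(512 512)    # actual pages per node, as read back by get_nodes
    declare -A sorted_t sorted_s
    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1   # collect distinct expected counts
        sorted_s[${nodes_sys[node]}]=1    # collect distinct actual counts
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    # With one distinct value per side this reduces to the traced [[ 512 == 512 ]].
    [[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]] && echo "per-node layout OK"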
00:03:03.760 20:20:21 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:03.760 20:20:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:03.760 20:20:21 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:03.760 20:20:21 -- common/autotest_common.sh@10 -- # set +x
00:03:03.760 ************************************
00:03:03.760 START TEST odd_alloc
00:03:03.760 ************************************
00:03:03.760 20:20:21 -- common/autotest_common.sh@1104 -- # odd_alloc
00:03:03.760 20:20:21 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:03.760 20:20:21 -- setup/hugepages.sh@49 -- # local size=2098176
00:03:03.760 20:20:21 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:03.760 20:20:21 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:03.760 20:20:21 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:03.760 20:20:21 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:03.760 20:20:21 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:03.760 20:20:21 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:03.760 20:20:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:03.760 20:20:21 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:03.760 20:20:21 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:03.760 20:20:21 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:03.760 20:20:21 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:03.760 20:20:21 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:03.760 20:20:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:03.760 20:20:21 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:03.760 20:20:21 -- setup/hugepages.sh@83 -- # : 513
00:03:03.760 20:20:21 -- setup/hugepages.sh@84 -- # : 1
00:03:03.760 20:20:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:03.760 20:20:21 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:03.760 20:20:21 -- setup/hugepages.sh@83 -- # : 0
00:03:03.760 20:20:21 -- setup/hugepages.sh@84 -- # : 0
00:03:03.760 20:20:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:03.760 20:20:21 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:03.760 20:20:21 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:03.760 20:20:21 -- setup/hugepages.sh@160 -- # setup output
00:03:03.760 20:20:21 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:03.760 20:20:21 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh
00:03:06.309 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:06.309 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:06.309 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:06.309 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:06.309 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:06.309 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:06.309 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:06.309 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:06.309 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:06.309 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:06.309 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:06.309 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:06.309 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:06.309 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:06.309 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:06.309 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:06.309 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:06.309 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:06.309 20:20:24 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:06.309 20:20:24 -- setup/hugepages.sh@89 -- # local node
00:03:06.309 20:20:24 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:06.309 20:20:24 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:06.309 20:20:24 -- setup/hugepages.sh@92 -- # local surp
00:03:06.309 20:20:24 -- setup/hugepages.sh@93 -- # local resv
00:03:06.309 20:20:24 -- setup/hugepages.sh@94 -- # local anon
00:03:06.309 20:20:24 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
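Note: the hugepages.sh@81-84 trace above distributes odd_alloc's total of 1025 pages over 2 nodes from the last index down, leaving node1 with 512 and node0 with 513. A minimal sketch of that loop; the integer-division form is an assumption consistent with the traced values (512, remainder 513, then 513, remainder 0), not verbatim source:

    # Spread an odd hugepage total across nodes as evenly as possible,
    # biasing the leftover page onto the lower-numbered node.
    _nr_hugepages=1025 _no_nodes=2
    nodes_test=()
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
        (( _nr_hugepages -= nodes_test[_no_nodes - 1] ))
        (( _no_nodes-- ))
    done
    echo "${nodes_test[@]}"   # -> 513 512, matching the trace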
00:03:06.309 20:20:24 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:06.309 20:20:24 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:06.309 20:20:24 -- setup/common.sh@18 -- # local node=
00:03:06.309 20:20:24 -- setup/common.sh@19 -- # local var val
00:03:06.309 20:20:24 -- setup/common.sh@20 -- # local mem_f mem
00:03:06.309 20:20:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:06.309 20:20:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:06.309 20:20:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:06.309 20:20:24 -- setup/common.sh@28 -- # mapfile -t mem
00:03:06.309 20:20:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:06.309 20:20:24 -- setup/common.sh@31 -- # IFS=': '
00:03:06.309 20:20:24 -- setup/common.sh@31 -- # read -r var val _
00:03:06.309 20:20:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558468 kB' 'MemFree: 240116492 kB' 'MemAvailable: 244271964 kB' 'Buffers: 2696 kB' 'Cached: 11814964 kB' 'SwapCached: 0 kB' 'Active: 7732076 kB' 'Inactive: 4736864 kB' 'Active(anon): 7160556 kB' 'Inactive(anon): 0 kB' 'Active(file): 571520 kB' 'Inactive(file): 4736864 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660428 kB' 'Mapped: 207912 kB' 'Shmem: 6509276 kB' 'KReclaimable: 652276 kB' 'Slab: 1327368 kB' 'SReclaimable: 652276 kB' 'SUnreclaim: 675092 kB' 'KernelStack: 24640 kB' 'PageTables: 9188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136618236 kB' 'Committed_AS: 8811648 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329636 kB' 'VmallocChunk: 0 kB' 'Percpu: 121344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3084352 kB' 'DirectMap2M: 21858304 kB' 'DirectMap1G: 245366784 kB'
[... xtrace repeats "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue" for every /proc/meminfo key preceding AnonHugePages ...]
00:03:06.310 20:20:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:06.310 20:20:24 -- setup/common.sh@33 -- # echo 0
00:03:06.310 20:20:24 -- setup/common.sh@33 -- # return 0
00:03:06.310 20:20:24 -- setup/hugepages.sh@97 -- # anon=0
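Note: verify_nr_hugepages gathers anon, surp and resv this same way, and the hugepages.sh@110 check earlier in the run shows the invariant being enforced: the kernel's HugePages_Total must equal nr_hugepages + surp + resv. A sketch of that accounting using the standalone get_meminfo sketch shown earlier (values are this run's; the final comparison mirrors the traced arithmetic test):

    # Accounting check mirroring setup/hugepages.sh@97-110.
    anon=$(get_meminfo AnonHugePages)     # 0 kB here: no THP pages in play
    surp=$(get_meminfo HugePages_Surp)    # 0
    resv=$(get_meminfo HugePages_Rsvd)    # 0
    total=$(get_meminfo HugePages_Total)  # 1025 after odd_alloc's setup
    nr_hugepages=1025
    (( total == nr_hugepages + surp + resv )) && echo "hugepage count verified"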
00:03:06.310 20:20:24 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:06.310 20:20:24 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:06.310 20:20:24 -- setup/common.sh@18 -- # local node=
00:03:06.310 20:20:24 -- setup/common.sh@19 -- # local var val
00:03:06.310 20:20:24 -- setup/common.sh@20 -- # local mem_f mem
00:03:06.310 20:20:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:06.310 20:20:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:06.310 20:20:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:06.310 20:20:24 -- setup/common.sh@28 -- # mapfile -t mem
00:03:06.310 20:20:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:06.310 20:20:24 -- setup/common.sh@31 -- # IFS=': '
00:03:06.310 20:20:24 -- setup/common.sh@31 -- # read -r var val _
00:03:06.310 20:20:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558468 kB' 'MemFree: 240117332 kB' 'MemAvailable: 244272804 kB' 'Buffers: 2696 kB' 'Cached: 11814964 kB' 'SwapCached: 0 kB' 'Active: 7732388 kB' 'Inactive: 4736864 kB' 'Active(anon): 7160868 kB' 'Inactive(anon): 0 kB' 'Active(file): 571520 kB' 'Inactive(file): 4736864 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660760 kB' 'Mapped: 207912 kB' 'Shmem: 6509276 kB' 'KReclaimable: 652276 kB' 'Slab: 1327352 kB' 'SReclaimable: 652276 kB' 'SUnreclaim: 675076 kB' 'KernelStack: 24624 kB' 'PageTables: 9132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136618236 kB' 'Committed_AS: 8811660 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329572 kB' 'VmallocChunk: 0 kB' 'Percpu: 121344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3084352 kB' 'DirectMap2M: 21858304 kB' 'DirectMap1G: 245366784 kB'
[... xtrace repeats "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" for every /proc/meminfo key preceding HugePages_Surp ...]
00:03:06.311 20:20:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:06.311 20:20:24 -- setup/common.sh@33 -- # echo 0
00:03:06.311 20:20:24 -- setup/common.sh@33 -- # return 0
00:03:06.311 20:20:24 -- setup/hugepages.sh@99 -- # surp=0
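Note: the snapshot above is internally consistent: 'Hugetlb: 2099200 kB' is exactly HugePages_Total times Hugepagesize, and odd_alloc's requested 2098176 kB (1024.5 pages) was apparently rounded up to 1025 whole pages; the round-up rule is an assumption inferred from the traced values. A quick check:

    # Cross-check of the meminfo snapshot above (values from this run).
    hugepages_total=1025 hugepagesize_kb=2048
    echo $(( hugepages_total * hugepagesize_kb ))
    # -> 2099200, matching 'Hugetlb: 2099200 kB'
    echo $(( (2098176 + hugepagesize_kb - 1) / hugepagesize_kb ))
    # -> 1025, the requested size rounded up to whole pages (assumed rule)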
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3084352 kB' 'DirectMap2M: 21858304 kB' 'DirectMap1G: 245366784 kB' 00:03:06.312 20:20:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.312 20:20:24 -- setup/common.sh@32 -- # continue [... xtrace elided: the scan continues past MemFree through Inactive(file) in the same compare-and-continue pattern ...] 00:03:06.312 20:20:24 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.312 20:20:24 -- setup/common.sh@32 -- # continue [... xtrace elided: every remaining key (Mlocked through HugePages_Free) is compared against HugePages_Rsvd and skipped in the same pattern ...]
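
A note for readers of these traces: the backslash-riddled right-hand sides such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d are not corruption. Inside [[ ... == ... ]] bash treats an unquoted right-hand side as a glob pattern, so when a script quotes it to force a literal match, xtrace re-prints it with every character escaped. The `script@line -- #` prefix on each traced statement comes from the CI's custom PS4, not bash's default `+`. A minimal reproduction:

```bash
#!/usr/bin/env bash
# Reproduce the escaped operands seen in the trace above.
set -x
var=MemTotal
# A quoted RHS means a literal string match; xtrace renders it as
#   [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
[[ $var == "HugePages_Rsvd" ]] || echo "no match, keep scanning"
```
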
00:03:06.313 20:20:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.313 20:20:24 -- setup/common.sh@33 -- # echo 0 00:03:06.313 20:20:24 -- setup/common.sh@33 -- # return 0 00:03:06.313 20:20:24 -- setup/hugepages.sh@100 -- # resv=0 00:03:06.313 20:20:24 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:06.313 nr_hugepages=1025 00:03:06.313 20:20:24 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:06.313 resv_hugepages=0 00:03:06.313 20:20:24 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:06.313 surplus_hugepages=0 00:03:06.313 20:20:24 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:06.313 anon_hugepages=0 00:03:06.313 20:20:24 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:06.313 20:20:24 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:06.313 20:20:24 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:06.313 20:20:24 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:06.313 20:20:24 -- setup/common.sh@18 -- # local node= 00:03:06.313 20:20:24 -- setup/common.sh@19 -- # local var val 00:03:06.313 20:20:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:06.313 20:20:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.313 20:20:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.313 20:20:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.313 20:20:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.313 20:20:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.313 20:20:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.313 20:20:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.313 20:20:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558468 kB' 'MemFree: 240116968 kB' 'MemAvailable: 244272440 kB' 'Buffers: 2696 kB' 'Cached: 11814976 kB' 'SwapCached: 0 kB' 'Active: 7733244 kB' 'Inactive: 4736864 kB' 'Active(anon): 7161724 kB' 'Inactive(anon): 0 kB' 'Active(file): 571520 kB' 'Inactive(file): 4736864 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661616 kB' 'Mapped: 208868 kB' 'Shmem: 6509288 kB' 'KReclaimable: 652276 kB' 'Slab: 1327428 kB' 'SReclaimable: 652276 kB' 'SUnreclaim: 675152 kB' 'KernelStack: 24656 kB' 'PageTables: 9252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136618236 kB' 'Committed_AS: 8814120 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329588 kB' 'VmallocChunk: 0 kB' 'Percpu: 121344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3084352 kB' 'DirectMap2M: 21858304 kB' 'DirectMap1G: 245366784 kB' 00:03:06.313 20:20:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.313 20:20:24 -- setup/common.sh@32 -- # continue 00:03:06.313 20:20:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.313 20:20:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.313 20:20:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.313 20:20:24 -- setup/common.sh@32 -- # continue 00:03:06.313 20:20:24 -- setup/common.sh@31 -- # IFS=': ' 
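
The get_meminfo calls traced above reduce to a small parser. The sketch below is reconstructed from the trace alone, not copied from SPDK's setup/common.sh, and get_meminfo_sketch is an illustrative name. It reads /proc/meminfo, or the per-node file when a node id is given; with the node argument empty, the probed path /sys/devices/system/node/node/meminfo does not exist, which is the failed [[ -e ... ]] visible in the trace:

```bash
#!/usr/bin/env bash
# Sketch of the traced get_meminfo helper (reconstructed, illustrative).
shopt -s extglob

get_meminfo_sketch() {
	local get=$1 node=${2:-}
	local mem_f=/proc/meminfo
	local -a mem
	local var val _
	# With an empty $node this probes .../node/node/meminfo, which fails,
	# so the global /proc/meminfo is used -- exactly as in the trace.
	[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
		mem_f=/sys/devices/system/node/node$node/meminfo
	mapfile -t mem <"$mem_f"
	# Per-node files prefix every line with "Node <N> "; strip it.
	mem=("${mem[@]#Node +([0-9]) }")
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

get_meminfo_sketch HugePages_Total    # -> 1025 on this machine during this run
get_meminfo_sketch HugePages_Surp 0   # node 0's surplus pages, 0 here
```
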
00:03:06.313 20:20:24 -- setup/common.sh@31 -- # read -r var val _ [... xtrace elided: MemAvailable through Unaccepted are each compared against HugePages_Total and skipped ...] 00:03:06.314 20:20:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.314 20:20:24 -- setup/common.sh@33 -- # echo 1025 00:03:06.314 20:20:24 -- setup/common.sh@33 -- # return 0 00:03:06.314 20:20:24 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:06.314 20:20:24 -- setup/hugepages.sh@112 -- # get_nodes 00:03:06.314 20:20:24 -- setup/hugepages.sh@27 -- # local node 00:03:06.314 20:20:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.314 20:20:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:06.314 20:20:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.314 20:20:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:06.314 20:20:24 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:06.314 20:20:24 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:06.314 20:20:24 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:06.314 20:20:24 -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:03:06.314 20:20:24 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:06.314 20:20:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:06.314 20:20:24 -- setup/common.sh@18 -- # local node=0 00:03:06.314 20:20:24 -- setup/common.sh@19 -- # local var val 00:03:06.314 20:20:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:06.314 20:20:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.314 20:20:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:06.314 20:20:24 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:06.314 20:20:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.314 20:20:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.314 20:20:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.314 20:20:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.315 20:20:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131816228 kB' 'MemFree: 126306604 kB' 'MemUsed: 5509624 kB' 'SwapCached: 0 kB' 'Active: 1773192 kB' 'Inactive: 344076 kB' 'Active(anon): 1569300 kB' 'Inactive(anon): 0 kB' 'Active(file): 203892 kB' 'Inactive(file): 344076 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1848732 kB' 'Mapped: 96056 kB' 'AnonPages: 277628 kB' 'Shmem: 1300764 kB' 'KernelStack: 13928 kB' 'PageTables: 5024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 334912 kB' 'Slab: 695692 kB' 'SReclaimable: 334912 kB' 'SUnreclaim: 360780 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:06.315 20:20:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.315 20:20:24 -- setup/common.sh@32 -- # continue 00:03:06.315 20:20:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.315 20:20:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.315 20:20:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.315 20:20:24 -- setup/common.sh@32 -- # continue 00:03:06.315 20:20:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.315 20:20:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.315 20:20:24 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.315 20:20:24 -- setup/common.sh@32 -- # continue 00:03:06.315 20:20:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.315 20:20:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.315 20:20:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.315 20:20:24 -- setup/common.sh@32 -- # continue 00:03:06.315 20:20:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.315 20:20:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.315 20:20:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.315 20:20:24 -- setup/common.sh@32 -- # continue 00:03:06.315 20:20:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.315 20:20:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.315 20:20:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.315 20:20:24 -- setup/common.sh@32 -- # continue 00:03:06.315 20:20:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.315 20:20:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.315 20:20:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
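
What the loop entered above is doing, in isolation: for every node it adds the globally computed reserve to the per-node request, then adds whatever surplus pages the kernel reports for that node in /sys/devices/system/node/nodeN/meminfo. A self-contained sketch under the same assumptions as the earlier get_meminfo_sketch note, with names illustrative and values taken from this run:

```bash
#!/usr/bin/env bash
# Per-node accounting sketched from the hugepages.sh@115-@117 trace.
resv=0                          # HugePages_Rsvd, computed globally above
nodes_test=([0]=512 [1]=513)    # per-node request made by the odd_alloc test
for node in "${!nodes_test[@]}"; do
	(( nodes_test[node] += resv ))
	# Surplus pages come from /sys/devices/system/node/node$node/meminfo;
	# hardcoded to 0 here, the value HugePages_Surp had for both nodes.
	surp_node=0
	(( nodes_test[node] += surp_node ))
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=512 node1=513
```
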
00:03:06.315 20:20:24 -- setup/common.sh@32 -- # continue [... xtrace elided: node0's remaining keys (Inactive(anon) through FileHugePages) are each compared against HugePages_Surp and skipped ...]
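
Once the node1 scan just below completes, the test closes with the sorted-set comparison traced further down (node0=512 expecting 513, node1=513 expecting 512, then [[ 512 513 == 512 513 ]]). The trick, sketched here with this run's values: using the page counts as indexed-array subscripts de-duplicates and numerically sorts them, so a 512/513 split matches regardless of which node got which count.

```bash
#!/usr/bin/env bash
# Order-insensitive check sketched from the hugepages.sh@126-@130 trace.
nodes_test=([0]=512 [1]=513)   # counts the verification accumulated
nodes_sys=([0]=513 [1]=512)    # counts reported by sysfs, opposite order
sorted_t=() sorted_s=()
for node in "${!nodes_test[@]}"; do
	sorted_t[nodes_test[node]]=1   # indexed arrays list keys in numeric order
	sorted_s[nodes_sys[node]]=1
done
echo "got ${!sorted_t[*]}, expecting ${!sorted_s[*]}"
[[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "per-node allocation OK"
```
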
00:03:06.315 20:20:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.315 20:20:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.315 20:20:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.315 20:20:24 -- setup/common.sh@32 -- # continue 00:03:06.315 20:20:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.315 20:20:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.315 20:20:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.315 20:20:24 -- setup/common.sh@32 -- # continue 00:03:06.315 20:20:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.315 20:20:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.315 20:20:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.315 20:20:24 -- setup/common.sh@32 -- # continue 00:03:06.315 20:20:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.315 20:20:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.315 20:20:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.315 20:20:24 -- setup/common.sh@32 -- # continue 00:03:06.315 20:20:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.315 20:20:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.315 20:20:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.315 20:20:24 -- setup/common.sh@33 -- # echo 0 00:03:06.315 20:20:24 -- setup/common.sh@33 -- # return 0 00:03:06.315 20:20:24 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:06.315 20:20:24 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:06.315 20:20:24 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:06.315 20:20:24 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:06.315 20:20:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:06.315 20:20:24 -- setup/common.sh@18 -- # local node=1 00:03:06.315 20:20:24 -- setup/common.sh@19 -- # local var val 00:03:06.315 20:20:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:06.315 20:20:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.315 20:20:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:06.316 20:20:24 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:06.316 20:20:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.316 20:20:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.316 20:20:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.316 20:20:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.316 20:20:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126742240 kB' 'MemFree: 113809924 kB' 'MemUsed: 12932316 kB' 'SwapCached: 0 kB' 'Active: 5960152 kB' 'Inactive: 4392788 kB' 'Active(anon): 5592524 kB' 'Inactive(anon): 0 kB' 'Active(file): 367628 kB' 'Inactive(file): 4392788 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9968944 kB' 'Mapped: 111804 kB' 'AnonPages: 384088 kB' 'Shmem: 5208528 kB' 'KernelStack: 10664 kB' 'PageTables: 4016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 317364 kB' 'Slab: 631720 kB' 'SReclaimable: 317364 kB' 'SUnreclaim: 314356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:06.316 20:20:24 -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.316 20:20:24 -- setup/common.sh@32 -- # continue [... xtrace elided: node1's remaining keys (MemFree through HugePages_Free) are each compared against HugePages_Surp and skipped ...] 00:03:06.316 20:20:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.316 20:20:24 -- setup/common.sh@33 -- # echo 0 00:03:06.316 20:20:24 -- setup/common.sh@33 -- # return 0 00:03:06.316 20:20:24 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:06.316 20:20:24 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:06.317 20:20:24 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:06.317 20:20:24 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:06.317 20:20:24 -- 
setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:06.317 node0=512 expecting 513 00:03:06.317 20:20:24 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:06.317 20:20:24 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:06.317 20:20:24 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:06.317 20:20:24 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:06.317 node1=513 expecting 512 00:03:06.317 20:20:24 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:06.317 00:03:06.317 real 0m2.677s 00:03:06.317 user 0m0.919s 00:03:06.317 sys 0m1.605s 00:03:06.317 20:20:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:06.317 20:20:24 -- common/autotest_common.sh@10 -- # set +x 00:03:06.317 ************************************ 00:03:06.317 END TEST odd_alloc 00:03:06.317 ************************************ 00:03:06.317 20:20:24 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:06.317 20:20:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:06.317 20:20:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:06.317 20:20:24 -- common/autotest_common.sh@10 -- # set +x 00:03:06.317 ************************************ 00:03:06.317 START TEST custom_alloc 00:03:06.317 ************************************ 00:03:06.317 20:20:24 -- common/autotest_common.sh@1104 -- # custom_alloc 00:03:06.317 20:20:24 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:06.317 20:20:24 -- setup/hugepages.sh@169 -- # local node 00:03:06.317 20:20:24 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:06.317 20:20:24 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:06.317 20:20:24 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:06.317 20:20:24 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:06.317 20:20:24 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:06.317 20:20:24 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:06.317 20:20:24 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:06.317 20:20:24 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:06.317 20:20:24 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:06.317 20:20:24 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:06.317 20:20:24 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:06.317 20:20:24 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:06.317 20:20:24 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:06.317 20:20:24 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:06.317 20:20:24 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:06.317 20:20:24 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:06.317 20:20:24 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:06.317 20:20:24 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:06.317 20:20:24 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:06.317 20:20:24 -- setup/hugepages.sh@83 -- # : 256 00:03:06.317 20:20:24 -- setup/hugepages.sh@84 -- # : 1 00:03:06.317 20:20:24 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:06.317 20:20:24 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:06.317 20:20:24 -- setup/hugepages.sh@83 -- # : 0 00:03:06.317 20:20:24 -- setup/hugepages.sh@84 -- # : 0 00:03:06.317 20:20:24 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:06.317 20:20:24 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:06.317 20:20:24 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:06.317 20:20:24 -- 
setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:06.317 20:20:24 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:06.317 20:20:24 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:06.317 20:20:24 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:06.317 20:20:24 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:06.317 20:20:24 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:06.317 20:20:24 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:06.317 20:20:24 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:06.317 20:20:24 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:06.317 20:20:24 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:06.317 20:20:24 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:06.317 20:20:24 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:06.317 20:20:24 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:06.317 20:20:24 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:06.317 20:20:24 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:06.317 20:20:24 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:06.317 20:20:24 -- setup/hugepages.sh@78 -- # return 0 00:03:06.317 20:20:24 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:06.317 20:20:24 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:06.317 20:20:24 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:06.317 20:20:24 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:06.317 20:20:24 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:06.317 20:20:24 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:06.317 20:20:24 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:06.317 20:20:24 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:06.317 20:20:24 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:06.317 20:20:24 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:06.317 20:20:24 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:06.317 20:20:24 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:06.317 20:20:24 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:06.317 20:20:24 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:06.317 20:20:24 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:06.317 20:20:24 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:06.317 20:20:24 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:06.317 20:20:24 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:06.317 20:20:24 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:06.317 20:20:24 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:06.317 20:20:24 -- setup/hugepages.sh@78 -- # return 0 00:03:06.317 20:20:24 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:06.317 20:20:24 -- setup/hugepages.sh@187 -- # setup output 00:03:06.317 20:20:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.317 20:20:24 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:03:08.864 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:08.864 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:08.864 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:08.864 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:08.864 0000:6f:01.0 (8086 0b25): Already using the vfio-pci 
driver 00:03:08.864 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:08.864 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:08.864 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:08.864 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:08.864 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:08.864 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:08.864 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:08.864 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:08.864 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:08.864 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:08.864 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:08.864 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:08.864 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:08.864 20:20:27 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:08.864 20:20:27 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:08.864 20:20:27 -- setup/hugepages.sh@89 -- # local node 00:03:08.864 20:20:27 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:08.864 20:20:27 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:08.864 20:20:27 -- setup/hugepages.sh@92 -- # local surp 00:03:08.864 20:20:27 -- setup/hugepages.sh@93 -- # local resv 00:03:08.864 20:20:27 -- setup/hugepages.sh@94 -- # local anon 00:03:08.864 20:20:27 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:08.864 20:20:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:08.864 20:20:27 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:08.864 20:20:27 -- setup/common.sh@18 -- # local node= 00:03:08.864 20:20:27 -- setup/common.sh@19 -- # local var val 00:03:08.864 20:20:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:08.864 20:20:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.864 20:20:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.864 20:20:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.864 20:20:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.864 20:20:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.864 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.864 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.864 20:20:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558468 kB' 'MemFree: 239070776 kB' 'MemAvailable: 243226248 kB' 'Buffers: 2696 kB' 'Cached: 11815100 kB' 'SwapCached: 0 kB' 'Active: 7733960 kB' 'Inactive: 4736864 kB' 'Active(anon): 7162440 kB' 'Inactive(anon): 0 kB' 'Active(file): 571520 kB' 'Inactive(file): 4736864 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662192 kB' 'Mapped: 207936 kB' 'Shmem: 6509412 kB' 'KReclaimable: 652276 kB' 'Slab: 1327624 kB' 'SReclaimable: 652276 kB' 'SUnreclaim: 675348 kB' 'KernelStack: 24656 kB' 'PageTables: 9148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136094972 kB' 'Committed_AS: 8812440 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329604 kB' 'VmallocChunk: 0 kB' 'Percpu: 121344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3084352 kB' 'DirectMap2M: 21858304 kB' 'DirectMap1G: 245366784 kB'
[trace condensed: get_meminfo AnonHugePages walks every /proc/meminfo field from MemTotal through HardwareCorrupted with the same IFS=': ' / read -r var val _ / [[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue cycle before hitting the match below]
00:03:08.865 20:20:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:08.865 20:20:27 -- setup/common.sh@33 -- # echo 0
00:03:08.865 20:20:27 -- setup/common.sh@33 -- # return 0
00:03:08.865 20:20:27 -- setup/hugepages.sh@97 -- # anon=0
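What the condensed trace above is doing: get_meminfo in setup/common.sh slurps the meminfo source (mem_f=/proc/meminfo here, or a per-node /sys/devices/system/node/node$node/meminfo with the "Node N" prefix stripped by the mem=("${mem[@]#Node +([0-9]) }") expansion), then walks the fields with IFS=': ' until the requested name matches and echoes its value. The backslash-riddled patterns such as \A\n\o\n\H\u\g\e\P\a\g\e\s are only an xtrace artifact: a quoted right-hand side of [[ == ]] is printed with every character escaped to show the match is literal. A minimal sketch of the same technique, assuming plain /proc/meminfo input (the function name and shape are illustrative, not the exact SPDK helper):

get_meminfo_field() {
    # Walk /proc/meminfo and print the value of the first field whose
    # name matches $1 exactly, e.g. 'AnonHugePages' -> '0' on this host.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done </proc/meminfo
    return 1 # field not present
}

anon=$(get_meminfo_field AnonHugePages) # matches the anon=0 recorded above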
00:03:08.865 20:20:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:08.865 20:20:27 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:08.865 20:20:27 -- setup/common.sh@18 -- # local node=
00:03:08.865 20:20:27 -- setup/common.sh@19 -- # local var val
00:03:08.865 20:20:27 -- setup/common.sh@20 -- # local mem_f mem
00:03:08.865 20:20:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:08.865 20:20:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:08.865 20:20:27 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:08.865 20:20:27 -- setup/common.sh@28 -- # mapfile -t mem
00:03:08.865 20:20:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:08.865 20:20:27 -- setup/common.sh@31 -- # IFS=': '
00:03:08.865 20:20:27 -- setup/common.sh@31 -- # read -r var val _
00:03:08.865 20:20:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558468 kB' 'MemFree: 239071112 kB' 'MemAvailable: 243226584 kB' 'Buffers: 2696 kB' 'Cached: 11815100 kB' 'SwapCached: 0 kB' 'Active: 7733920 kB' 'Inactive: 4736864 kB' 'Active(anon): 7162400 kB' 'Inactive(anon): 0 kB' 'Active(file): 571520 kB' 'Inactive(file): 4736864 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662160 kB' 'Mapped: 207876 kB' 'Shmem: 6509412 kB' 'KReclaimable: 652276 kB' 'Slab: 1327608 kB' 'SReclaimable: 652276 kB' 'SUnreclaim: 675332 kB' 'KernelStack: 24640 kB' 'PageTables: 9088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136094972 kB' 'Committed_AS: 8812452 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329556 kB' 'VmallocChunk: 0 kB' 'Percpu: 121344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3084352 kB' 'DirectMap2M: 21858304 kB' 'DirectMap1G: 245366784 kB'
[trace condensed: the field scan then repeats the read / compare / continue cycle from MemTotal through HugePages_Rsvd without matching HugePages_Surp]
00:03:09.132 20:20:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:09.132 20:20:27 -- setup/common.sh@33 -- # echo 0
00:03:09.132 20:20:27 -- setup/common.sh@33 -- # return 0
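Both snapshots so far agree on the hugepage counters: HugePages_Total 1536, HugePages_Free 1536, HugePages_Rsvd 0, HugePages_Surp 0, Hugepagesize 2048 kB, Hugetlb 3145728 kB. Those figures are internally consistent, since 1536 pages of 2048 kB each is exactly the reported Hugetlb total. A quick sanity check of that arithmetic (a sketch only; the test script does not run this exact computation):

# 1536 pages * 2048 kB/page should equal the 'Hugetlb:' figure above.
total=1536 pagesize_kb=2048 hugetlb_kb=3145728
(( total * pagesize_kb == hugetlb_kb )) && echo "hugetlb accounting is consistent"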
00:03:09.132 20:20:27 -- setup/hugepages.sh@99 -- # surp=0
00:03:09.132 20:20:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:09.132 20:20:27 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:09.132 20:20:27 -- setup/common.sh@18 -- # local node=
00:03:09.132 20:20:27 -- setup/common.sh@19 -- # local var val
00:03:09.132 20:20:27 -- setup/common.sh@20 -- # local mem_f mem
00:03:09.132 20:20:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:09.132 20:20:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:09.132 20:20:27 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:09.132 20:20:27 -- setup/common.sh@28 -- # mapfile -t mem
00:03:09.132 20:20:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:09.132 20:20:27 -- setup/common.sh@31 -- # IFS=': '
00:03:09.132 20:20:27 -- setup/common.sh@31 -- # read -r var val _
00:03:09.132 20:20:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558468 kB' 'MemFree: 239071368 kB' 'MemAvailable: 243226840 kB' 'Buffers: 2696 kB' 'Cached: 11815112 kB' 'SwapCached: 0 kB' 'Active: 7733536 kB' 'Inactive: 4736864 kB' 'Active(anon): 7162016 kB' 'Inactive(anon): 0 kB' 'Active(file): 571520 kB' 'Inactive(file): 4736864 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661792 kB' 'Mapped: 207876 kB' 'Shmem: 6509424 kB' 'KReclaimable: 652276 kB' 'Slab: 1327608 kB' 'SReclaimable: 652276 kB' 'SUnreclaim: 675332 kB' 'KernelStack: 24608 kB' 'PageTables: 8984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136094972 kB' 'Committed_AS: 8812464 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329556 kB' 'VmallocChunk: 0 kB' 'Percpu: 121344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3084352 kB' 'DirectMap2M: 21858304 kB' 'DirectMap1G: 245366784 kB'
[trace condensed: the field scan repeats the read / compare / continue cycle from MemTotal through HugePages_Free without matching HugePages_Rsvd]
00:03:09.133 20:20:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:09.133 20:20:27 -- setup/common.sh@33 -- # echo 0
00:03:09.133 20:20:27 -- setup/common.sh@33 -- # return 0
00:03:09.133 20:20:27 -- setup/hugepages.sh@100 -- # resv=0
00:03:09.133 20:20:27 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:09.133 nr_hugepages=1536
00:03:09.133 20:20:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:09.133 resv_hugepages=0
00:03:09.133 20:20:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:09.133 surplus_hugepages=0
00:03:09.133 20:20:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:09.133 anon_hugepages=0
00:03:09.133 20:20:27 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:09.133 20:20:27 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:03:09.133 20:20:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:09.133 20:20:27 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:09.133 20:20:27 -- setup/common.sh@18 -- # local node=
00:03:09.133 20:20:27 -- setup/common.sh@19 -- # local var val
00:03:09.133 20:20:27 -- setup/common.sh@20 -- # local mem_f mem
00:03:09.133 20:20:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:09.133 20:20:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:09.133 20:20:27 -- setup/common.sh@25 -- # [[ -n '' ]]
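The echoes above close the loop on this test's setup: the HUGENODE spec built earlier was nodes_hp[0]=512,nodes_hp[1]=1024, so verify_nr_hugepages expects 512 + 1024 = 1536 pages in total, with reserved, surplus and anonymous hugepages all at 0 -- which is exactly what the get_meminfo calls report. A short sketch of that per-node accounting (variable names are illustrative, mirroring the trace rather than quoting the script):

# Recompute the expected pool size from the per-node spec used by custom_alloc.
nodes_hp=([0]=512 [1]=1024) # node0 gets 512 pages, node1 gets 1024
expected=0
for node in "${!nodes_hp[@]}"; do
    ((expected += nodes_hp[node]))
done
echo "expecting $expected hugepages" # 1536, matching nr_hugepages above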
00:03:09.133 20:20:27 -- setup/common.sh@28 -- # mapfile -t mem
00:03:09.133 20:20:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:09.133 20:20:27 -- setup/common.sh@31 -- # IFS=': '
00:03:09.133 20:20:27 -- setup/common.sh@31 -- # read -r var val _
00:03:09.133 20:20:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558468 kB' 'MemFree: 239071116 kB' 'MemAvailable: 243226588 kB' 'Buffers: 2696 kB' 'Cached: 11815112 kB' 'SwapCached: 0 kB' 'Active: 7734168 kB' 'Inactive: 4736864 kB' 'Active(anon): 7162648 kB' 'Inactive(anon): 0 kB' 'Active(file): 571520 kB' 'Inactive(file): 4736864 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662464 kB' 'Mapped: 207876 kB' 'Shmem: 6509424 kB' 'KReclaimable: 652276 kB' 'Slab: 1327660 kB' 'SReclaimable: 652276 kB' 'SUnreclaim: 675384 kB' 'KernelStack: 24624 kB' 'PageTables: 9048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136094972 kB' 'Committed_AS: 8812480 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329572 kB' 'VmallocChunk: 0 kB' 'Percpu: 121344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3084352 kB' 'DirectMap2M: 21858304 kB' 'DirectMap1G: 245366784 kB'
[trace condensed: the HugePages_Total scan repeats the read / compare / continue cycle from MemTotal through FileHugePages without a match]
00:03:09.134 20:20:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:09.134 20:20:27 -- setup/common.sh@32 -- #
continue 00:03:09.134 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.134 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.134 20:20:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.134 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.134 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.134 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.134 20:20:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.134 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.135 20:20:27 -- setup/common.sh@33 -- # echo 1536 00:03:09.135 20:20:27 -- setup/common.sh@33 -- # return 0 00:03:09.135 20:20:27 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:09.135 20:20:27 -- setup/hugepages.sh@112 -- # get_nodes 00:03:09.135 20:20:27 -- setup/hugepages.sh@27 -- # local node 00:03:09.135 20:20:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:09.135 20:20:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:09.135 20:20:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:09.135 20:20:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:09.135 20:20:27 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:09.135 20:20:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:09.135 20:20:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:09.135 20:20:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:09.135 20:20:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:09.135 20:20:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:09.135 20:20:27 -- setup/common.sh@18 -- # local node=0 00:03:09.135 20:20:27 -- setup/common.sh@19 -- # local var val 00:03:09.135 20:20:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:09.135 20:20:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.135 20:20:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:09.135 20:20:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:09.135 20:20:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.135 20:20:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.135 20:20:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131816228 kB' 'MemFree: 126310280 kB' 'MemUsed: 5505948 kB' 'SwapCached: 0 kB' 'Active: 1772396 kB' 'Inactive: 344076 kB' 'Active(anon): 1568504 kB' 'Inactive(anon): 0 kB' 'Active(file): 203892 kB' 'Inactive(file): 344076 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1848832 kB' 'Mapped: 96072 kB' 'AnonPages: 276784 kB' 'Shmem: 1300864 kB' 'KernelStack: 13896 kB' 'PageTables: 4820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 334912 kB' 'Slab: 696124 kB' 'SReclaimable: 334912 kB' 'SUnreclaim: 361212 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.135 20:20:27 
-- setup/common.sh@31 -- # IFS=': ' 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.135 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.135 20:20:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.136 20:20:27 -- setup/common.sh@33 -- # echo 0 00:03:09.136 20:20:27 -- setup/common.sh@33 -- # return 0 
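The node-0 lookup above resolves HugePages_Surp by reading /sys/devices/system/node/node0/meminfo, stripping the "Node <N> " prefix with the extglob expansion mem=("${mem[@]#Node +([0-9]) }"), and scanning key by key until the name matches. A minimal standalone sketch of that lookup, assuming a NUMA system that exposes per-node meminfo; the helper name get_node_meminfo and the plain while-read loop are illustrative, not SPDK's setup/common.sh itself:

  # Hypothetical helper, not from the repo: per-node meminfo lines look
  # like "Node 0 HugePages_Surp:     0", so split on ':' plus whitespace.
  get_node_meminfo() {
      local key=$1 node=$2 _node _id var val _
      while IFS=': ' read -r _node _id var val _; do
          if [[ $var == "$key" ]]; then
              echo "$val"
              return 0
          fi
      done < "/sys/devices/system/node/node${node}/meminfo"
      return 1
  }

  get_node_meminfo HugePages_Surp 0   # prints 0 for the node-0 state traced above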
00:03:09.136 20:20:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:09.136 20:20:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:09.136 20:20:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:09.136 20:20:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:09.136 20:20:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:09.136 20:20:27 -- setup/common.sh@18 -- # local node=1 00:03:09.136 20:20:27 -- setup/common.sh@19 -- # local var val 00:03:09.136 20:20:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:09.136 20:20:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.136 20:20:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:09.136 20:20:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:09.136 20:20:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.136 20:20:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.136 20:20:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126742240 kB' 'MemFree: 112760992 kB' 'MemUsed: 13981248 kB' 'SwapCached: 0 kB' 'Active: 5960784 kB' 'Inactive: 4392788 kB' 'Active(anon): 5593156 kB' 'Inactive(anon): 0 kB' 'Active(file): 367628 kB' 'Inactive(file): 4392788 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9968992 kB' 'Mapped: 111804 kB' 'AnonPages: 384636 kB' 'Shmem: 5208576 kB' 'KernelStack: 10680 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 317364 kB' 'Slab: 631536 kB' 'SReclaimable: 317364 kB' 'SUnreclaim: 314172 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # continue 
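The surrounding checks reduce to simple accounting, restated here with the numbers already present in this trace (nothing new is measured):

  nr_hugepages=1536 surp=0 resv=0      # hugepages.sh@110 above: (( 1536 == 1536 + 0 + 0 ))
  node0=512 node1=1024                 # per-node HugePages_Total from the two node dumps
  (( node0 + node1 == nr_hugepages ))  # 512 + 1024 == 1536
  # Reserved space: 1536 pages * 2048 kB/page = 3145728 kB, matching the
  # 'Hugetlb: 3145728 kB' line in the /proc/meminfo dump earlier in the trace.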
00:03:09.136 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.136 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.136 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # [[ KernelStack 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.137 
20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # continue 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.137 20:20:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.137 20:20:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.137 20:20:27 -- setup/common.sh@33 -- # echo 0 00:03:09.137 20:20:27 -- setup/common.sh@33 -- # return 0 00:03:09.137 20:20:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:09.137 20:20:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:09.137 20:20:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:09.137 20:20:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:09.137 20:20:27 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:09.137 node0=512 expecting 512 00:03:09.137 20:20:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:09.137 20:20:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:09.137 20:20:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:09.137 20:20:27 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:09.137 node1=1024 expecting 1024 00:03:09.137 20:20:27 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:09.137 00:03:09.137 real 0m2.897s 00:03:09.137 user 0m1.009s 00:03:09.137 sys 0m1.751s 00:03:09.137 20:20:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:09.137 20:20:27 -- common/autotest_common.sh@10 -- # set +x 00:03:09.137 ************************************ 00:03:09.137 END TEST custom_alloc 00:03:09.137 ************************************ 00:03:09.137 20:20:27 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:09.137 20:20:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:09.137 20:20:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:09.137 20:20:27 -- common/autotest_common.sh@10 -- # set +x 00:03:09.137 ************************************ 00:03:09.137 START TEST no_shrink_alloc 00:03:09.137 ************************************ 00:03:09.137 20:20:27 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:03:09.137 20:20:27 -- setup/hugepages.sh@195 -- # 
get_test_nr_hugepages 2097152 0 00:03:09.137 20:20:27 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:09.137 20:20:27 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:09.137 20:20:27 -- setup/hugepages.sh@51 -- # shift 00:03:09.137 20:20:27 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:09.137 20:20:27 -- setup/hugepages.sh@52 -- # local node_ids 00:03:09.137 20:20:27 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:09.137 20:20:27 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:09.137 20:20:27 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:09.137 20:20:27 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:09.137 20:20:27 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:09.137 20:20:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:09.137 20:20:27 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:09.137 20:20:27 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:09.137 20:20:27 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:09.137 20:20:27 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:09.137 20:20:27 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:09.137 20:20:27 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:09.137 20:20:27 -- setup/hugepages.sh@73 -- # return 0 00:03:09.137 20:20:27 -- setup/hugepages.sh@198 -- # setup output 00:03:09.137 20:20:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.137 20:20:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:03:11.683 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:11.683 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:11.683 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:11.683 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:11.683 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:11.683 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:11.683 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:11.683 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:11.683 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:11.683 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:11.683 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:11.683 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:11.683 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:11.683 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:11.683 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:11.683 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:11.683 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:11.683 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:11.948 20:20:30 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:11.948 20:20:30 -- setup/hugepages.sh@89 -- # local node 00:03:11.948 20:20:30 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:11.948 20:20:30 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:11.948 20:20:30 -- setup/hugepages.sh@92 -- # local surp 00:03:11.948 20:20:30 -- setup/hugepages.sh@93 -- # local resv 00:03:11.948 20:20:30 -- setup/hugepages.sh@94 -- # local anon 00:03:11.948 20:20:30 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:11.948 20:20:30 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:11.948 20:20:30 -- 
setup/common.sh@17 -- # local get=AnonHugePages 00:03:11.948 20:20:30 -- setup/common.sh@18 -- # local node= 00:03:11.948 20:20:30 -- setup/common.sh@19 -- # local var val 00:03:11.948 20:20:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:11.948 20:20:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.948 20:20:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.948 20:20:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.948 20:20:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.948 20:20:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.948 20:20:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558468 kB' 'MemFree: 240145292 kB' 'MemAvailable: 244300764 kB' 'Buffers: 2696 kB' 'Cached: 11815224 kB' 'SwapCached: 0 kB' 'Active: 7737492 kB' 'Inactive: 4736864 kB' 'Active(anon): 7165972 kB' 'Inactive(anon): 0 kB' 'Active(file): 571520 kB' 'Inactive(file): 4736864 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 666280 kB' 'Mapped: 208024 kB' 'Shmem: 6509536 kB' 'KReclaimable: 652276 kB' 'Slab: 1327636 kB' 'SReclaimable: 652276 kB' 'SUnreclaim: 675360 kB' 'KernelStack: 24640 kB' 'PageTables: 8992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619260 kB' 'Committed_AS: 8812640 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329716 kB' 'VmallocChunk: 0 kB' 'Percpu: 121344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3084352 kB' 'DirectMap2M: 21858304 kB' 'DirectMap1G: 245366784 kB' 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.948 20:20:30 -- 
setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.948 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.948 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.949 20:20:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.949 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.949 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.949 20:20:30 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:11.949 20:20:30 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.949 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.949 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.949 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.949 20:20:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.949 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.949 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.949 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.949 20:20:30 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.949 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.949 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.949 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.949 20:20:30 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.949 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.949 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.949 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.949 20:20:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.949 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.949 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.949 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.949 20:20:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.949 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.949 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.949 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.949 20:20:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.949 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.949 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.949 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.949 20:20:30 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.949 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.949 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.949 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.949 20:20:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.949 20:20:30 -- setup/common.sh@32 -- # continue 00:03:11.949 20:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.949 20:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.949 20:20:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.949 20:20:30 -- setup/common.sh@33 -- # echo 0 00:03:11.949 20:20:30 -- setup/common.sh@33 -- # return 0 00:03:11.949 20:20:30 -- setup/hugepages.sh@97 -- # anon=0 00:03:11.949 20:20:30 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:11.949 20:20:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:11.949 20:20:30 -- setup/common.sh@18 -- # local node= 00:03:11.949 20:20:30 -- setup/common.sh@19 -- # local var val 00:03:11.949 20:20:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:11.949 20:20:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.949 20:20:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.949 20:20:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.949 20:20:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.949 20:20:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
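Before re-reading system meminfo, verify_nr_hugepages first ruled out transparent hugepages (hugepages.sh@96 above): the string "always [madvise] never" is presumably read from /sys/kernel/mm/transparent_hugepage/enabled, where the bracketed word is the active mode, and AnonHugePages only needs checking when that mode is not [never]. A hedged sketch of the probe, using the standard kernel paths rather than the repo function verbatim:

  thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  if [[ $thp != *"[never]"* ]]; then
      # THP-backed anonymous memory in kB; 0 in this run, hence anon=0 above
      awk '/^AnonHugePages:/ {print $2}' /proc/meminfo
  fi

Note that the dumps on either side of this probe already report 'HugePages_Total: 1024' and 'Hugetlb: 2097152 kB': the no_shrink_alloc sizing requested 2097152 kB, i.e. 2097152 / 2048 = 1024 pages, all assigned to node 0 (node_ids=('0') and nodes_test[_no_nodes]=1024 in the trace above).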
00:03:11.949 20:20:30 -- setup/common.sh@31 -- # IFS=': '
00:03:11.949 20:20:30 -- setup/common.sh@31 -- # read -r var val _
00:03:11.949 20:20:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558468 kB' 'MemFree: 240145292 kB' 'MemAvailable: 244300764 kB' 'Buffers: 2696 kB' 'Cached: 11815224 kB' 'SwapCached: 0 kB' 'Active: 7737828 kB' 'Inactive: 4736864 kB' 'Active(anon): 7166308 kB' 'Inactive(anon): 0 kB' 'Active(file): 571520 kB' 'Inactive(file): 4736864 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 666560 kB' 'Mapped: 207948 kB' 'Shmem: 6509536 kB' 'KReclaimable: 652276 kB' 'Slab: 1327644 kB' 'SReclaimable: 652276 kB' 'SUnreclaim: 675368 kB' 'KernelStack: 24704 kB' 'PageTables: 9192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619260 kB' 'Committed_AS: 8812652 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329700 kB' 'VmallocChunk: 0 kB' 'Percpu: 121344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3084352 kB' 'DirectMap2M: 21858304 kB' 'DirectMap1G: 245366784 kB'
[xtrace elided: setup/common.sh@31-32 reads each field of the snapshot above in turn and continues until HugePages_Surp matches]
00:03:11.950 20:20:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:11.950 20:20:30 -- setup/common.sh@33 -- # echo 0
00:03:11.950 20:20:30 -- setup/common.sh@33 -- # return 0
00:03:11.950 20:20:30 -- setup/hugepages.sh@99 -- # surp=0
00:03:11.950 20:20:30 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:11.950 20:20:30 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:11.950 20:20:30 -- setup/common.sh@18 -- # local node=
00:03:11.950 20:20:30 -- setup/common.sh@19 -- # local var val
00:03:11.950 20:20:30 -- setup/common.sh@20 -- # local mem_f mem
00:03:11.950 20:20:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.950 20:20:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:11.950 20:20:30 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:11.950 20:20:30 -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.950 20:20:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.950 20:20:30 -- setup/common.sh@31 -- # IFS=': '
00:03:11.950 20:20:30 -- setup/common.sh@31 -- # read -r var val _
00:03:11.950 20:20:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558468 kB' 'MemFree: 240145292 kB' 'MemAvailable: 244300764 kB' 'Buffers: 2696 kB' 'Cached: 11815224 kB' 'SwapCached: 0 kB' 'Active: 7737120 kB' 'Inactive: 4736864 kB' 'Active(anon): 7165600 kB' 'Inactive(anon): 0 kB' 'Active(file): 571520 kB' 'Inactive(file): 4736864 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 665812 kB' 'Mapped: 207900 kB' 'Shmem: 6509536 kB' 'KReclaimable: 652276 kB' 'Slab: 1327644 kB' 'SReclaimable: 652276 kB' 'SUnreclaim: 675368 kB' 'KernelStack: 24704 kB' 'PageTables: 9184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619260 kB' 'Committed_AS: 8812664 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329684 kB' 'VmallocChunk: 0 kB' 'Percpu: 121344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3084352 kB' 'DirectMap2M: 21858304 kB' 'DirectMap1G: 245366784 kB'
[xtrace elided: per-field scan as above, this time until HugePages_Rsvd matches]
00:03:11.952 20:20:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:11.952 20:20:30 -- setup/common.sh@33 -- # echo 0
00:03:11.952 20:20:30 -- setup/common.sh@33 -- # return 0
00:03:11.952 20:20:30 -- setup/hugepages.sh@100 -- # resv=0
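The field-by-field scan traced above is the get_meminfo helper from spdk/test/setup/common.sh. A minimal sketch of what the trace shows, reconstructed from the xtrace itself (the names follow the trace; the real helper may differ in detail):

    #!/usr/bin/env bash
    shopt -s extglob   # required for the +([0-9]) pattern below

    # get_meminfo FIELD [NODE] -- print FIELD's value from /proc/meminfo,
    # or from the node's sysfs meminfo when NODE is given.
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # sysfs per-node meminfo prefixes every line with "Node N "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp   # prints 0 in the run above

With no second argument the node path /sys/devices/system/node/node/meminfo does not exist, so the helper falls back to /proc/meminfo, exactly as the @22-@25 trace entries show.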
00:03:11.952 20:20:30 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:11.952 nr_hugepages=1024
00:03:11.952 20:20:30 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:11.952 resv_hugepages=0
00:03:11.952 20:20:30 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:11.952 surplus_hugepages=0
00:03:11.952 20:20:30 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:11.952 anon_hugepages=0
00:03:11.952 20:20:30 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:11.952 20:20:30 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:11.952 20:20:30 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:11.952 20:20:30 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:11.952 20:20:30 -- setup/common.sh@18 -- # local node=
00:03:11.952 20:20:30 -- setup/common.sh@19 -- # local var val
00:03:11.952 20:20:30 -- setup/common.sh@20 -- # local mem_f mem
00:03:11.952 20:20:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.952 20:20:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:11.952 20:20:30 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:11.952 20:20:30 -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.952 20:20:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.952 20:20:30 -- setup/common.sh@31 -- # IFS=': '
00:03:11.952 20:20:30 -- setup/common.sh@31 -- # read -r var val _
00:03:11.952 20:20:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558468 kB' 'MemFree: 240145556 kB' 'MemAvailable: 244301028 kB' 'Buffers: 2696 kB' 'Cached: 11815248 kB' 'SwapCached: 0 kB' 'Active: 7736956 kB' 'Inactive: 4736864 kB' 'Active(anon): 7165436 kB' 'Inactive(anon): 0 kB' 'Active(file): 571520 kB' 'Inactive(file): 4736864 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 665612 kB' 'Mapped: 207900 kB' 'Shmem: 6509560 kB' 'KReclaimable: 652276 kB' 'Slab: 1327616 kB' 'SReclaimable: 652276 kB' 'SUnreclaim: 675340 kB' 'KernelStack: 24640 kB' 'PageTables: 8976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619260 kB' 'Committed_AS: 8812680 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329684 kB' 'VmallocChunk: 0 kB' 'Percpu: 121344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3084352 kB' 'DirectMap2M: 21858304 kB' 'DirectMap1G: 245366784 kB'
[xtrace elided: per-field scan until HugePages_Total matches]
00:03:11.953 20:20:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:11.953 20:20:30 -- setup/common.sh@33 -- # echo 1024
00:03:11.953 20:20:30 -- setup/common.sh@33 -- # return 0
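The checks at setup/hugepages.sh@107 and @110 are plain arithmetic over the values just parsed: the kernel's HugePages_Total must equal the requested page count plus any surplus and reserved pages. A worked restatement with this run's numbers (not the script itself):

    # HugePages_Total (1024) == requested + surplus + reserved
    nr_hugepages=1024 surp=0 resv=0
    (( 1024 == nr_hugepages + surp + resv )) && echo "hugepage accounting is consistent"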
00:03:11.953 20:20:30 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:11.953 20:20:30 -- setup/hugepages.sh@112 -- # get_nodes
00:03:11.953 20:20:30 -- setup/hugepages.sh@27 -- # local node
00:03:11.953 20:20:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:11.953 20:20:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:11.953 20:20:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:11.953 20:20:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:11.953 20:20:30 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:11.953 20:20:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
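get_nodes (setup/hugepages.sh@27-33) records one hugepage count per NUMA node directory under sysfs; the trace only shows the already-expanded assignments (1024 for node0, 0 for node1), not where the script reads the counts from. A sketch under the assumption that they come from each node's nr_hugepages knob, which is the standard sysfs location (the real script may source them differently):

    shopt -s extglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # Assumed source of the per-node count; the xtrace only shows the
        # expanded result (1024 and 0), not this read.
        nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}   # 2 on this host: node0=1024, node1=0
    (( no_nodes > 0 ))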
00:03:11.953 20:20:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:11.953 20:20:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:11.953 20:20:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:11.953 20:20:30 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:11.953 20:20:30 -- setup/common.sh@18 -- # local node=0
00:03:11.953 20:20:30 -- setup/common.sh@19 -- # local var val
00:03:11.953 20:20:30 -- setup/common.sh@20 -- # local mem_f mem
00:03:11.953 20:20:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.953 20:20:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:11.953 20:20:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:11.953 20:20:30 -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.953 20:20:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.953 20:20:30 -- setup/common.sh@31 -- # IFS=': '
00:03:11.953 20:20:30 -- setup/common.sh@31 -- # read -r var val _
00:03:11.953 20:20:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131816228 kB' 'MemFree: 125278144 kB' 'MemUsed: 6538084 kB' 'SwapCached: 0 kB' 'Active: 1775020 kB' 'Inactive: 344076 kB' 'Active(anon): 1571128 kB' 'Inactive(anon): 0 kB' 'Active(file): 203892 kB' 'Inactive(file): 344076 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1848928 kB' 'Mapped: 96088 kB' 'AnonPages: 279748 kB' 'Shmem: 1300960 kB' 'KernelStack: 14008 kB' 'PageTables: 5072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 334816 kB' 'Slab: 696164 kB' 'SReclaimable: 334816 kB' 'SUnreclaim: 361348 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace elided: per-field scan of the node0 snapshot until HugePages_Surp matches]
00:03:11.954 20:20:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:11.954 20:20:30 -- setup/common.sh@33 -- # echo 0
00:03:11.954 20:20:30 -- setup/common.sh@33 -- # return 0
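The per-node query above differs from the system-wide reads in exactly two traced steps: mem_f switches to node0's sysfs meminfo (common.sh@24), and the "Node 0 " prefix that sysfs adds to every line is stripped before parsing (common.sh@29). Standalone, that normalization looks like this (the +([0-9]) pattern needs extglob):

    shopt -s extglob
    mapfile -t mem </sys/devices/system/node/node0/meminfo
    # sysfs lines read "Node 0 MemTotal: 131816228 kB"; drop the prefix so
    # the same IFS=': ' read -r var val _ parser works for both sources.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"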
00:03:11.954 20:20:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:11.954 20:20:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:11.954 20:20:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:11.954 20:20:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:11.954 20:20:30 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:11.954 node0=1024 expecting 1024
00:03:11.954 20:20:30 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:11.954 20:20:30 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:11.954 20:20:30 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:11.954 20:20:30 -- setup/hugepages.sh@202 -- # setup output
00:03:11.954 20:20:30 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:11.954 20:20:30 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh
00:03:15.257 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:15.257 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:15.257 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:15.257 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:15.257 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:15.257 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:15.257 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:15.257 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:15.257 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:15.257 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:15.257 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:15.257 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:15.257 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:15.257 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:15.257 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:15.257 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:15.257 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:15.257 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:15.257 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:03:15.257 20:20:33 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:15.257 20:20:33 -- setup/hugepages.sh@89 -- # local node
00:03:15.257 20:20:33 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:15.257 20:20:33 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:15.257 20:20:33 -- setup/hugepages.sh@92 -- # local surp
00:03:15.257 20:20:33 -- setup/hugepages.sh@93 -- # local resv
00:03:15.257 20:20:33 -- setup/hugepages.sh@94 -- # local anon
00:03:15.257 20:20:33 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:15.257 20:20:33 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:15.257 20:20:33 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:15.257 20:20:33 -- setup/common.sh@18 -- # local node=
00:03:15.257 20:20:33 -- setup/common.sh@19 -- # local var val
00:03:15.257 20:20:33 -- setup/common.sh@20 -- # local mem_f mem
00:03:15.257 20:20:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.257 20:20:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.257 20:20:33 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.257 20:20:33 -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.257 20:20:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.257 20:20:33 -- setup/common.sh@31 -- # IFS=': '
00:03:15.257 20:20:33 -- setup/common.sh@31 -- # read -r var val _
00:03:15.257 20:20:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558468 kB' 'MemFree: 240152668 kB' 'MemAvailable: 244308060 kB' 'Buffers: 2696 kB' 'Cached: 11823416 kB' 'SwapCached: 0 kB' 'Active: 7744208 kB' 'Inactive: 4736864 kB' 'Active(anon): 7172688 kB' 'Inactive(anon): 0 kB' 'Active(file): 571520 kB' 'Inactive(file): 4736864 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663648 kB' 'Mapped: 208024 kB' 'Shmem: 6517728 kB' 'KReclaimable: 652116 kB' 'Slab: 1328436 kB' 'SReclaimable: 652116 kB' 'SUnreclaim: 676320 kB' 'KernelStack: 24736 kB' 'PageTables: 9272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619260 kB' 'Committed_AS: 8821616 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329748 kB' 'VmallocChunk: 0 kB' 'Percpu: 121344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3084352 kB' 'DirectMap2M: 21858304 kB' 'DirectMap1G: 245366784 kB'
00:03:15.257 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 20:20:33 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.257 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 20:20:33 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.257 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 20:20:33 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.257 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 20:20:33 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.257 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 20:20:33 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.257 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 20:20:33 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.257 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 20:20:33 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.257 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 20:20:33 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.257 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 20:20:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.257 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 20:20:33 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.257 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 20:20:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.258 20:20:33 -- setup/common.sh@32 -- 
# continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:15.258 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 
20:20:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.258 20:20:33 -- setup/common.sh@33 -- # echo 0 00:03:15.258 20:20:33 -- setup/common.sh@33 -- # return 0 00:03:15.258 20:20:33 -- setup/hugepages.sh@97 -- # anon=0 00:03:15.258 20:20:33 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:15.258 20:20:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.258 20:20:33 -- setup/common.sh@18 -- # local node= 00:03:15.258 20:20:33 -- setup/common.sh@19 -- # local var val 00:03:15.258 20:20:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:15.258 20:20:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.258 20:20:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.258 20:20:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.258 20:20:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.258 20:20:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 20:20:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558468 kB' 'MemFree: 240153424 kB' 'MemAvailable: 244308816 kB' 'Buffers: 2696 kB' 'Cached: 11823544 kB' 'SwapCached: 0 kB' 'Active: 7744924 kB' 'Inactive: 4736864 kB' 'Active(anon): 7173404 kB' 'Inactive(anon): 0 kB' 'Active(file): 571520 kB' 'Inactive(file): 4736864 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 664296 kB' 'Mapped: 207836 kB' 'Shmem: 6517856 kB' 'KReclaimable: 652116 kB' 'Slab: 1328424 kB' 'SReclaimable: 652116 kB' 'SUnreclaim: 676308 kB' 'KernelStack: 24720 kB' 'PageTables: 9216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619260 kB' 'Committed_AS: 8821628 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329716 kB' 'VmallocChunk: 0 kB' 'Percpu: 121344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3084352 kB' 'DirectMap2M: 21858304 kB' 'DirectMap1G: 245366784 kB' 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.258 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r 
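For readers following the trace: get_meminfo in setup/common.sh snapshots the chosen meminfo file into an array with mapfile, strips any "Node N " prefix, then walks it field by field until the requested key matches. A minimal, illustrative re-implementation sketch (not the SPDK source; the sed call stands in for the script's extglob prefix strip) behaves the same way:

    #!/usr/bin/env bash
    # Sketch: print one field's value from /proc/meminfo, or from a node's
    # meminfo when a node number is given; absent fields count as 0.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo var val _
        # Per-node statistics live in sysfs; lines there carry a "Node N " prefix.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do
            # First matching field wins, mirroring the echo/return 0 above.
            [[ $var == "$get" ]] && echo "${val:-0}" && return 0
        done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
        echo 0
    }

    get_meminfo AnonHugePages    # -> 0 on this host
    get_meminfo HugePages_Total  # -> 1024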
00:03:15.258 20:20:33 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:15.258 20:20:33 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:15.258 20:20:33 -- setup/common.sh@18 -- # local node=
00:03:15.258 20:20:33 -- setup/common.sh@19 -- # local var val
00:03:15.258 20:20:33 -- setup/common.sh@20 -- # local mem_f mem
00:03:15.258 20:20:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.258 20:20:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.258 20:20:33 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.258 20:20:33 -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.258 20:20:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.258 20:20:33 -- setup/common.sh@31 -- # IFS=': '
00:03:15.258 20:20:33 -- setup/common.sh@31 -- # read -r var val _
00:03:15.258 20:20:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558468 kB' 'MemFree: 240153424 kB' 'MemAvailable: 244308816 kB' 'Buffers: 2696 kB' 'Cached: 11823544 kB' 'SwapCached: 0 kB' 'Active: 7744924 kB' 'Inactive: 4736864 kB' 'Active(anon): 7173404 kB' 'Inactive(anon): 0 kB' 'Active(file): 571520 kB' 'Inactive(file): 4736864 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 664296 kB' 'Mapped: 207836 kB' 'Shmem: 6517856 kB' 'KReclaimable: 652116 kB' 'Slab: 1328424 kB' 'SReclaimable: 652116 kB' 'SUnreclaim: 676308 kB' 'KernelStack: 24720 kB' 'PageTables: 9216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619260 kB' 'Committed_AS: 8821628 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329716 kB' 'VmallocChunk: 0 kB' 'Percpu: 121344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3084352 kB' 'DirectMap2M: 21858304 kB' 'DirectMap1G: 245366784 kB'
[xtrace field-by-field scan elided: every field is tested against HugePages_Surp and skipped via continue until the match below]
00:03:15.259 20:20:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:15.259 20:20:33 -- setup/common.sh@33 -- # echo 0
00:03:15.259 20:20:33 -- setup/common.sh@33 -- # return 0
00:03:15.259 20:20:33 -- setup/hugepages.sh@99 -- # surp=0
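Why the trace shows patterns like \H\u\g\e\P\a\g\e\s\_\S\u\r\p: inside [[ ... == ... ]] a quoted right-hand side matches literally rather than as a glob, and bash xtrace renders such a literal operand by backslash-escaping each character. A tiny demo of the same rendering, illustrative only:

    #!/usr/bin/env bash
    set -x                 # the same trace mode this harness runs under
    get=HugePages_Surp
    # xtrace prints the next line as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    [[ MemTotal == "$get" ]] || :
    [[ HugePages_Surp == "$get" ]] && echo matched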
00:03:15.259 20:20:33 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:15.259 20:20:33 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:15.259 20:20:33 -- setup/common.sh@18 -- # local node=
00:03:15.259 20:20:33 -- setup/common.sh@19 -- # local var val
00:03:15.259 20:20:33 -- setup/common.sh@20 -- # local mem_f mem
00:03:15.259 20:20:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.259 20:20:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.259 20:20:33 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.259 20:20:33 -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.259 20:20:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.259 20:20:33 -- setup/common.sh@31 -- # IFS=': '
00:03:15.259 20:20:33 -- setup/common.sh@31 -- # read -r var val _
00:03:15.259 20:20:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558468 kB' 'MemFree: 240153512 kB' 'MemAvailable: 244308904 kB' 'Buffers: 2696 kB' 'Cached: 11823544 kB' 'SwapCached: 0 kB' 'Active: 7743152 kB' 'Inactive: 4736864 kB' 'Active(anon): 7171632 kB' 'Inactive(anon): 0 kB' 'Active(file): 571520 kB' 'Inactive(file): 4736864 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662968 kB' 'Mapped: 207720 kB' 'Shmem: 6517856 kB' 'KReclaimable: 652116 kB' 'Slab: 1328360 kB' 'SReclaimable: 652116 kB' 'SUnreclaim: 676244 kB' 'KernelStack: 24704 kB' 'PageTables: 9148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619260 kB' 'Committed_AS: 8821640 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329716 kB' 'VmallocChunk: 0 kB' 'Percpu: 121344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3084352 kB' 'DirectMap2M: 21858304 kB' 'DirectMap1G: 245366784 kB'
[xtrace field-by-field scan elided: every field is tested against HugePages_Rsvd and skipped via continue until the match below]
00:03:15.260 20:20:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:15.260 20:20:33 -- setup/common.sh@33 -- # echo 0
00:03:15.260 20:20:33 -- setup/common.sh@33 -- # return 0
00:03:15.260 20:20:33 -- setup/hugepages.sh@100 -- # resv=0
00:03:15.260 20:20:33 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:15.260 nr_hugepages=1024
00:03:15.260 20:20:33 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:15.260 resv_hugepages=0
00:03:15.260 20:20:33 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:15.260 surplus_hugepages=0
00:03:15.260 20:20:33 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:15.260 anon_hugepages=0
00:03:15.260 20:20:33 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:15.260 20:20:33 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
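The checks at hugepages.sh@107 and @109 are the point of verify_nr_hugepages: the pool is consistent when the expected page count equals the kernel-reported total plus surplus and reserved pages, and transparent hugepages (AnonHugePages) have not leaked into the accounting. Schematically, with get_meminfo as sketched above and the values from this run:

    #!/usr/bin/env bash
    expected=1024                                # size the allocation settled on
    anon=$(get_meminfo AnonHugePages)            # 0; THP must not skew the numbers
    surp=$(get_meminfo HugePages_Surp)           # 0
    resv=$(get_meminfo HugePages_Rsvd)           # 0
    nr_hugepages=$(get_meminfo HugePages_Total)  # 1024

    # Both assertions from the trace hold: the totals reconcile, and nothing
    # is surplus or reserved, so the pool is exactly 1024 pages.
    (( expected == nr_hugepages + surp + resv ))
    (( expected == nr_hugepages ))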
00:03:15.260 20:20:33 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:15.260 20:20:33 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:15.260 20:20:33 -- setup/common.sh@18 -- # local node=
00:03:15.260 20:20:33 -- setup/common.sh@19 -- # local var val
00:03:15.260 20:20:33 -- setup/common.sh@20 -- # local mem_f mem
00:03:15.260 20:20:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.260 20:20:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.260 20:20:33 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.260 20:20:33 -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.260 20:20:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.260 20:20:33 -- setup/common.sh@31 -- # IFS=': '
00:03:15.260 20:20:33 -- setup/common.sh@31 -- # read -r var val _
00:03:15.260 20:20:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558468 kB' 'MemFree: 240152756 kB' 'MemAvailable: 244308148 kB' 'Buffers: 2696 kB' 'Cached: 11823548 kB' 'SwapCached: 0 kB' 'Active: 7743332 kB' 'Inactive: 4736864 kB' 'Active(anon): 7171812 kB' 'Inactive(anon): 0 kB' 'Active(file): 571520 kB' 'Inactive(file): 4736864 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663144 kB' 'Mapped: 207720 kB' 'Shmem: 6517860 kB' 'KReclaimable: 652116 kB' 'Slab: 1328360 kB' 'SReclaimable: 652116 kB' 'SUnreclaim: 676244 kB' 'KernelStack: 24688 kB' 'PageTables: 9096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619260 kB' 'Committed_AS: 8821656 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329716 kB' 'VmallocChunk: 0 kB' 'Percpu: 121344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3084352 kB' 'DirectMap2M: 21858304 kB' 'DirectMap1G: 245366784 kB'
[xtrace field-by-field scan elided: every field is tested against HugePages_Total and skipped via continue until the match below]
00:03:15.261 20:20:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:15.261 20:20:33 -- setup/common.sh@33 -- # echo 1024
00:03:15.261 20:20:33 -- setup/common.sh@33 -- # return 0
00:03:15.261 20:20:33 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:15.261 20:20:33 -- setup/hugepages.sh@112 -- # get_nodes
00:03:15.261 20:20:33 -- setup/hugepages.sh@27 -- # local node
00:03:15.261 20:20:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:15.261 20:20:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:15.261 20:20:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:15.261 20:20:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:15.261 20:20:33 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:15.261 20:20:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
setup/common.sh@31 -- # IFS=': ' 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.261 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.262 20:20:33 -- setup/common.sh@31 
-- # read -r var val _ 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # continue 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.262 20:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.262 20:20:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.262 20:20:33 -- setup/common.sh@33 -- # echo 0 00:03:15.262 20:20:33 -- setup/common.sh@33 -- # return 0 00:03:15.262 20:20:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:15.262 20:20:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:15.262 20:20:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:15.262 20:20:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:15.262 20:20:33 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:15.262 node0=1024 expecting 1024 00:03:15.262 20:20:33 
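
The scan above is setup/common.sh's get_meminfo walking a meminfo file one "key: value" line at a time until the requested key matches. A minimal standalone sketch of the same pattern, reconstructed from the trace rather than copied from the SPDK source (the sed pre-strip stands in for the mapfile prefix-trim the real helper uses):

    # get_meminfo <key> [node] - print the value of one meminfo field
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # Per-node stats live in sysfs and prefix every line with "Node N ".
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

    get_meminfo HugePages_Total     # prints 1024 on this machine, per the log
    get_meminfo HugePages_Surp 0    # prints 0, matching the node0 lookup above
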
00:03:15.262 20:20:33 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:15.262
00:03:15.262 real 0m5.967s
00:03:15.262 user 0m2.010s
00:03:15.262 sys 0m3.696s
00:03:15.262 20:20:33 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:15.262 20:20:33 -- common/autotest_common.sh@10 -- # set +x
00:03:15.262 ************************************
00:03:15.262 END TEST no_shrink_alloc
00:03:15.262 ************************************
00:03:15.262 20:20:33 -- setup/hugepages.sh@217 -- # clear_hp
00:03:15.262 20:20:33 -- setup/hugepages.sh@37 -- # local node hp
00:03:15.262 20:20:33 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:15.262 20:20:33 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:15.262 20:20:33 -- setup/hugepages.sh@41 -- # echo 0
... (the echo 0 repeats for each hugepage size directory on both nodes) ...
00:03:15.262 20:20:33 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:15.262 20:20:33 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:15.262
00:03:15.262 real 0m23.112s
00:03:15.262 user 0m7.092s
00:03:15.262 sys 0m12.849s
00:03:15.262 20:20:33 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:15.262 20:20:33 -- common/autotest_common.sh@10 -- # set +x
00:03:15.262 ************************************
00:03:15.262 END TEST hugepages
00:03:15.262 ************************************
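
clear_hp above zeroes every hugepage pool so the next test starts from a clean slate. A sketch of that teardown under one stated assumption: the echo 0 in the trace lands in each pool's nr_hugepages file, which is the standard sysfs knob (the trace shows only the glob and the echo):

    # Zero every hugepage pool on every NUMA node (run as root).
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # assumed target file; see note above
        done
    done
    export CLEAR_HUGE=yes   # exported for the setup scripts that run next
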
00:03:15.262 20:20:33 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/driver.sh
00:03:15.262 20:20:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:15.262 20:20:33 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:15.262 20:20:33 -- common/autotest_common.sh@10 -- # set +x
00:03:15.262 ************************************
00:03:15.262 START TEST driver
00:03:15.262 ************************************
00:03:15.262 20:20:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/driver.sh
00:03:15.262 * Looking for test storage...
00:03:15.262 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup
00:03:15.262 20:20:33 -- setup/driver.sh@68 -- # setup reset
00:03:15.262 20:20:33 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:15.262 20:20:33 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset
00:03:19.463 20:20:37 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:03:19.463 20:20:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:19.463 20:20:37 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:19.463 20:20:37 -- common/autotest_common.sh@10 -- # set +x
00:03:19.463 ************************************
00:03:19.463 START TEST guess_driver
00:03:19.463 ************************************
00:03:19.463 20:20:37 -- common/autotest_common.sh@1104 -- # guess_driver
00:03:19.463 20:20:37 -- setup/driver.sh@46 -- # local driver setup_driver marker
00:03:19.463 20:20:37 -- setup/driver.sh@47 -- # local fail=0
00:03:19.463 20:20:37 -- setup/driver.sh@49 -- # pick_driver
00:03:19.463 20:20:37 -- setup/driver.sh@36 -- # vfio
00:03:19.463 20:20:37 -- setup/driver.sh@21 -- # local iommu_grups
00:03:19.463 20:20:37 -- setup/driver.sh@22 -- # local unsafe_vfio
00:03:19.463 20:20:37 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:03:19.463 20:20:37 -- setup/driver.sh@25 -- # unsafe_vfio=N
00:03:19.463 20:20:37 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:03:19.463 20:20:37 -- setup/driver.sh@29 -- # (( 334 > 0 ))
00:03:19.463 20:20:37 -- setup/driver.sh@30 -- # is_driver vfio_pci
00:03:19.463 20:20:37 -- setup/driver.sh@14 -- # mod vfio_pci
00:03:19.463 20:20:37 -- setup/driver.sh@12 -- # dep vfio_pci
00:03:19.463 20:20:37 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:03:19.463 20:20:37 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:03:19.463 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:03:19.463 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
... (the dependency chain repeats iommufd.ko/vfio.ko and adds vfio_iommu_type1.ko, vfio-pci-core.ko and vfio-pci.ko) ...
00:03:19.463 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:03:19.463 20:20:37 -- setup/driver.sh@30 -- # return 0
00:03:19.463 20:20:37 -- setup/driver.sh@37 -- # echo vfio-pci
00:03:19.463 20:20:37 -- setup/driver.sh@49 -- # driver=vfio-pci
00:03:19.463 20:20:37 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:03:19.463 20:20:37 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:03:19.463 Looking for driver=vfio-pci
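
pick_driver above settles on vfio-pci because IOMMU groups exist (334 of them) and modprobe can resolve vfio_pci to real .ko files. A condensed sketch of that decision; the uio_pci_generic fallback is an assumption about the untaken branch, since this run only exercises the vfio path:

    # Prefer vfio-pci when the platform can actually use it.
    iommu_groups=(/sys/kernel/iommu_groups/*)
    if (( ${#iommu_groups[@]} > 0 )) &&
       modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
        driver=vfio-pci
    else
        driver=uio_pci_generic   # assumed fallback, not shown in this run
    fi
    echo "Looking for driver=$driver"
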
00:03:19.463 20:20:37 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:19.463 20:20:37 -- setup/driver.sh@45 -- # setup output config
00:03:19.463 20:20:37 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:19.463 20:20:37 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config
00:03:22.006 20:20:40 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:22.006 20:20:40 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:22.006 20:20:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
... (the same marker/driver check repeats for every device line printed by setup.sh config; each bound device reports vfio-pci) ...
00:03:24.483 20:20:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:24.483 20:20:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:24.483 20:20:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
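
The loop above is guess_driver validating the choice: setup.sh config prints one line per device, with an "->" marker and the bound driver in the last two columns, and the test walks every line checking the driver column. A sketch of that read loop; the fail bookkeeping is my reading of the @61/@64 checks, not verbatim SPDK source:

    fail=0
    while read -r _ _ _ _ marker setup_driver; do
        [[ $marker == '->' ]] || continue           # skip lines without a binding
        [[ $setup_driver == "$driver" ]] || fail=1  # any other driver fails the test
    done < <(/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config)
    (( fail == 0 ))
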
00:03:24.483 20:20:42 -- setup/driver.sh@64 -- # (( fail == 0 ))
00:03:24.483 20:20:42 -- setup/driver.sh@65 -- # setup reset
00:03:24.483 20:20:42 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:24.483 20:20:42 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset
00:03:28.687
00:03:28.687 real 0m9.099s
00:03:28.687 user 0m1.933s
00:03:28.687 sys 0m3.762s
00:03:28.687 20:20:46 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:28.687 20:20:46 -- common/autotest_common.sh@10 -- # set +x
00:03:28.687 ************************************
00:03:28.687 END TEST guess_driver
00:03:28.687 ************************************
00:03:28.687
00:03:28.687 real 0m13.297s
00:03:28.687 user 0m3.000s
00:03:28.687 sys 0m5.903s
00:03:28.687 20:20:46 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:28.687 20:20:46 -- common/autotest_common.sh@10 -- # set +x
00:03:28.687 ************************************
00:03:28.687 END TEST driver
00:03:28.687 ************************************
00:03:28.687 20:20:46 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/devices.sh
00:03:28.687 20:20:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:28.687 20:20:46 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:28.687 20:20:46 -- common/autotest_common.sh@10 -- # set +x
00:03:28.687 ************************************
00:03:28.687 START TEST devices
00:03:28.687 ************************************
00:03:28.687 20:20:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/devices.sh
00:03:28.687 * Looking for test storage...
00:03:28.687 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup
00:03:28.687 20:20:46 -- setup/devices.sh@190 -- # trap cleanup EXIT
00:03:28.687 20:20:46 -- setup/devices.sh@192 -- # setup reset
00:03:28.687 20:20:46 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:28.687 20:20:46 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset
00:03:31.988 20:20:50 -- setup/devices.sh@194 -- # get_zoned_devs
00:03:31.988 20:20:50 -- common/autotest_common.sh@1654 -- # zoned_devs=()
00:03:31.988 20:20:50 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs
00:03:31.988 20:20:50 -- common/autotest_common.sh@1655 -- # local nvme bdf
00:03:31.988 20:20:50 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme*
00:03:31.988 20:20:50 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1
00:03:31.988 20:20:50 -- common/autotest_common.sh@1647 -- # local device=nvme0n1
00:03:31.988 20:20:50 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:31.988 20:20:50 -- common/autotest_common.sh@1650 -- # [[ none != none ]]
... (the same zoned check repeats for nvme1n1; neither device is zoned) ...
00:03:31.988 20:20:50 -- setup/devices.sh@196 -- # blocks=()
00:03:31.988 20:20:50 -- setup/devices.sh@196 -- # declare -a blocks
00:03:31.988 20:20:50 -- setup/devices.sh@197 -- # blocks_to_pci=()
00:03:31.988 20:20:50 -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:03:31.988 20:20:50 -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:03:31.988 20:20:50 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:03:31.988 20:20:50 -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:03:31.988 20:20:50 -- setup/devices.sh@201 -- # ctrl=nvme0
00:03:31.988 20:20:50 -- setup/devices.sh@202 -- # pci=0000:c9:00.0
00:03:31.988 20:20:50 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\c\9\:\0\0\.\0* ]]
00:03:31.988 20:20:50 -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:03:31.988 20:20:50 -- scripts/common.sh@380 -- # local block=nvme0n1 pt
00:03:31.988 20:20:50 -- scripts/common.sh@389 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:03:31.988 No valid GPT data, bailing
00:03:31.988 20:20:50 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:31.988 20:20:50 -- scripts/common.sh@393 -- # pt=
00:03:31.988 20:20:50 -- scripts/common.sh@394 -- # return 1
00:03:31.988 20:20:50 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:03:31.988 20:20:50 -- setup/common.sh@76 -- # local dev=nvme0n1
00:03:31.988 20:20:50 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:03:31.988 20:20:50 -- setup/common.sh@80 -- # echo 2000398934016
00:03:31.988 20:20:50 -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size ))
00:03:31.988 20:20:50 -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:03:31.988 20:20:50 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:c9:00.0
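
devices.sh builds its candidate list by filtering /sys/block: zoned namespaces are excluded, a device that already carries a partition table counts as in use, and anything under min_disk_size (3 GiB = 3221225472) is skipped, with byte sizes derived from the 512-byte sector count in sysfs. A sketch of that filter, assuming the blkid fallback the log shows stands in for the spdk-gpt.py probe:

    min_disk_size=$((3 * 1024 * 1024 * 1024))
    blocks=()
    for block in /sys/block/nvme*; do
        dev=${block##*/}
        [[ $(<"$block/queue/zoned") == none ]] || continue           # skip zoned
        [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]] && continue # in use (root)
        bytes=$(( $(<"$block/size") * 512 ))                         # sectors -> bytes
        (( bytes >= min_disk_size )) || continue
        blocks+=("$dev")                                             # e.g. nvme0n1
    done
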
00:03:31.988 20:20:50 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:03:31.988 20:20:50 -- setup/devices.sh@201 -- # ctrl=nvme1n1
00:03:31.988 20:20:50 -- setup/devices.sh@201 -- # ctrl=nvme1
00:03:31.988 20:20:50 -- setup/devices.sh@202 -- # pci=0000:ca:00.0
00:03:31.988 20:20:50 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\c\a\:\0\0\.\0* ]]
00:03:31.988 20:20:50 -- setup/devices.sh@204 -- # block_in_use nvme1n1
00:03:31.988 20:20:50 -- scripts/common.sh@380 -- # local block=nvme1n1 pt
00:03:31.988 20:20:50 -- scripts/common.sh@389 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1
00:03:31.988 No valid GPT data, bailing
00:03:31.988 20:20:50 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:03:31.988 20:20:50 -- scripts/common.sh@393 -- # pt=
00:03:31.988 20:20:50 -- scripts/common.sh@394 -- # return 1
00:03:31.988 20:20:50 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1
00:03:31.988 20:20:50 -- setup/common.sh@76 -- # local dev=nvme1n1
00:03:31.988 20:20:50 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]]
00:03:31.988 20:20:50 -- setup/common.sh@80 -- # echo 2000398934016
00:03:31.988 20:20:50 -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size ))
00:03:31.988 20:20:50 -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:03:31.988 20:20:50 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:ca:00.0
00:03:31.988 20:20:50 -- setup/devices.sh@209 -- # (( 2 > 0 ))
00:03:31.988 20:20:50 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
00:03:31.988 20:20:50 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:03:31.988 20:20:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:31.988 20:20:50 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:31.988 20:20:50 -- common/autotest_common.sh@10 -- # set +x
00:03:31.988 ************************************
00:03:31.988 START TEST nvme_mount
00:03:31.988 ************************************
00:03:31.988 20:20:50 -- common/autotest_common.sh@1104 -- # nvme_mount
00:03:31.988 20:20:50 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:03:31.988 20:20:50 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:03:31.988 20:20:50 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:03:31.988 20:20:50 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:31.988 20:20:50 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
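
partition_drive, traced next, wipes the GPT and then carves fixed 1 GiB partitions, serialising the sgdisk calls with flock on the disk node. A sketch whose arithmetic reproduces the sector numbers in the log:

    disk=/dev/nvme0n1
    size=$((1073741824 / 512))               # 1 GiB in 512-byte sectors = 2097152
    sgdisk "$disk" --zap-all
    part_start=2048                          # first usable GPT sector
    part_end=$((part_start + size - 1))      # 2099199, as in --new=1:2048:2099199
    flock "$disk" sgdisk "$disk" --new=1:"$part_start":"$part_end"
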
00:03:31.988 20:20:50 -- setup/common.sh@39 -- # local disk=nvme0n1
00:03:31.988 20:20:50 -- setup/common.sh@40 -- # local part_no=1
00:03:31.988 20:20:50 -- setup/common.sh@41 -- # local size=1073741824
00:03:31.988 20:20:50 -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:03:31.988 20:20:50 -- setup/common.sh@44 -- # parts=()
00:03:31.988 20:20:50 -- setup/common.sh@44 -- # local parts
00:03:31.988 20:20:50 -- setup/common.sh@46 -- # (( part = 1 ))
00:03:31.988 20:20:50 -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:31.988 20:20:50 -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:31.988 20:20:50 -- setup/common.sh@46 -- # (( part++ ))
00:03:31.988 20:20:50 -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:31.988 20:20:50 -- setup/common.sh@51 -- # (( size /= 512 ))
00:03:31.988 20:20:50 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:03:31.988 20:20:50 -- setup/common.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:03:32.929 Creating new GPT entries in memory.
00:03:32.929 GPT data structures destroyed! You may now partition the disk using fdisk or
00:03:32.929 other utilities.
00:03:32.929 20:20:51 -- setup/common.sh@57 -- # (( part = 1 ))
00:03:32.929 20:20:51 -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:32.929 20:20:51 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:32.929 20:20:51 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:32.929 20:20:51 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:33.871 Creating new GPT entries in memory.
00:03:33.872 The operation has completed successfully.
00:03:33.872 20:20:52 -- setup/common.sh@57 -- # (( part++ ))
00:03:33.872 20:20:52 -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:33.872 20:20:52 -- setup/common.sh@62 -- # wait 3293453
00:03:33.872 20:20:52 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:03:33.872 20:20:52 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount size=
00:03:33.872 20:20:52 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:03:33.872 20:20:52 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:03:33.872 20:20:52 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:03:34.144 20:20:52 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
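
The mkfs helper just traced wraps three steps: make the mount point, format, mount. A sketch reconstructed from the setup/common.sh@66-72 trace lines (the function deliberately shadows the mkfs binary, as in the trace, and calls mkfs.ext4 directly, so there is no recursion):

    # mkfs <dev> <mount_point> [size] - format a device and mount it for the test
    mkfs() {
        local dev=$1 mount=$2 size=$3
        mkdir -p "$mount"
        [[ -e $dev ]] || return 1
        mkfs.ext4 -qF "$dev" $size   # $size is optional, e.g. the later 1024M call
        mount "$dev" "$mount"
    }

    mkfs /dev/nvme0n1p1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
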
00:03:34.145 20:20:52 -- setup/devices.sh@105 -- # verify 0000:c9:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:34.145 20:20:52 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0
00:03:34.145 20:20:52 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:03:34.145 20:20:52 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:03:34.145 20:20:52 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:34.145 20:20:52 -- setup/devices.sh@53 -- # local found=0
00:03:34.145 20:20:52 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:34.145 20:20:52 -- setup/devices.sh@56 -- # :
00:03:34.145 20:20:52 -- setup/devices.sh@59 -- # local pci status
00:03:34.145 20:20:52 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:34.145 20:20:52 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0
00:03:34.145 20:20:52 -- setup/devices.sh@47 -- # setup output config
00:03:34.145 20:20:52 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:34.145 20:20:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config
00:03:36.694 20:20:54 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]]
00:03:36.694 20:20:54 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:03:36.694 20:20:54 -- setup/devices.sh@63 -- # found=1
00:03:36.694 20:20:54 -- setup/devices.sh@60 -- # read -r pci _ _ status
... (0000:ca:00.0 and every other BDF from 0000:6a:01.0 through 0000:f6:02.0 fail the 0000:c9:00.0 match and are read past) ...
00:03:36.694 20:20:54 -- setup/devices.sh@66 -- # (( found == 1 ))
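
verify, traced above, narrows setup.sh to the test controller with PCI_ALLOWED and then inspects its config report: the allowed BDF must show an "Active devices: ... <mounts> ..." line, which is what flips found to 1. A compressed sketch of that check; the column layout comes from the @60 read, and the exact matching is my paraphrase of the @62/@63 pattern tests:

    dev=0000:c9:00.0
    mounts=nvme0n1:nvme0n1p1
    found=0
    while read -r pci _ _ status; do
        if [[ $pci == "$dev" && $status == *"Active devices: "*"$mounts"* ]]; then
            found=1   # the allowed device is held by our mount, so setup.sh skips it
        fi
    done < <(PCI_ALLOWED=$dev /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config)
    (( found == 1 ))
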
00:03:36.694 20:20:54 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount ]]
00:03:36.694 20:20:54 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:03:36.694 20:20:55 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:36.694 20:20:55 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:36.694 20:20:55 -- setup/devices.sh@110 -- # cleanup_nvme
00:03:36.694 20:20:55 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:03:36.694 20:20:55 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:03:36.694 20:20:55 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:36.694 20:20:55 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:03:36.694 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:36.694 20:20:55 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:36.694 20:20:55 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:36.955 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:03:36.955 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54
00:03:36.955 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:36.955 /dev/nvme0n1: calling ioctl to re-read partition table: Success
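
cleanup_nvme, just traced, is the symmetric teardown: unmount if mounted, then wipe filesystem and partition-table signatures so the next stage sees a bare disk, which is exactly what the wipefs offsets above show (ext4 magic at 0x438, both GPT headers, the protective MBR). The same steps as a standalone sketch:

    nvme_mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
    mountpoint -q "$nvme_mount" && umount "$nvme_mount"
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # ext4 signature
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1       # GPT + PMBR signatures
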
00:03:36.955 20:20:55 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:03:36.955 20:20:55 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:03:36.955 20:20:55 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:03:37.215 20:20:55 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:03:37.215 20:20:55 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:03:37.215 20:20:55 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:03:37.215 20:20:55 -- setup/devices.sh@116 -- # verify 0000:c9:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
... (verify repeats as before under PCI_ALLOWED=0000:c9:00.0: the allowed BDF reports "Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev", so found=1, and every other BDF is read past) ...
00:03:40.128 20:20:58 -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:40.128 20:20:58 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount ]]
00:03:40.128 20:20:58 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:03:40.128 20:20:58 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:40.128 20:20:58 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:40.128 20:20:58 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:03:40.128 20:20:58 -- setup/devices.sh@125 -- # verify 0000:c9:00.0 data@nvme0n1 '' ''
00:03:40.128 20:20:58 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0
00:03:40.128 20:20:58 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1
00:03:40.128 20:20:58 -- setup/devices.sh@50 -- # local mount_point=
00:03:40.128 20:20:58 -- setup/devices.sh@51 -- # local test_file=
00:03:40.128 20:20:58 -- setup/devices.sh@53 -- # local found=0
00:03:40.128 20:20:58 -- setup/devices.sh@55 -- # [[ -n '' ]]
00:03:40.128 20:20:58 -- setup/devices.sh@59 -- # local pci status
00:03:40.128 20:20:58 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:40.128 20:20:58 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0
00:03:40.128 20:20:58 -- setup/devices.sh@47 -- # setup output config
00:03:40.128 20:20:58 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:40.128 20:20:58 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config
00:03:43.428 20:21:01 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]]
00:03:43.428 20:21:01 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:03:43.428 20:21:01 -- setup/devices.sh@63 -- # found=1
00:03:43.428 20:21:01 -- setup/devices.sh@60 -- # read -r pci _ _ status
... (every other BDF fails the 0000:c9:00.0 match and is read past) ...
00:03:43.428 20:21:01 -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:43.428 20:21:01 -- setup/devices.sh@68 -- # [[ -n '' ]]
00:03:43.428 20:21:01 -- setup/devices.sh@68 -- # return 0
00:03:43.428 20:21:01 -- setup/devices.sh@128 -- # cleanup_nvme
00:03:43.428 20:21:01 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:03:43.428 20:21:01 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:43.428 20:21:01 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:43.428 20:21:01 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:43.428 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:43.428
00:03:43.428 real 0m11.504s
00:03:43.428 user 0m3.002s
00:03:43.428 sys 0m5.727s
00:03:43.428 20:21:01 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:43.428 20:21:01 -- common/autotest_common.sh@10 -- # set +x
00:03:43.429 ************************************
00:03:43.429 END TEST nvme_mount
00:03:43.429 ************************************
00:03:43.429 20:21:01 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:03:43.429 20:21:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:43.429 20:21:01 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:43.429 20:21:01 -- common/autotest_common.sh@10 -- # set +x
00:03:43.429 ************************************
00:03:43.429 START TEST dm_mount
00:03:43.429 ************************************
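
dm_mount repartitions the same disk, this time into two 1 GiB partitions. Note the "wait 3298718" in the trace below: sync_dev_uevents.sh is started before sgdisk runs and the script later waits on its PID so both partition nodes exist before use. The backgrounding with `&`/`$!` is my reconstruction; the trace shows only the script invocation and the wait:

    /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/sync_dev_uevents.sh \
        block/partition nvme0n1p1 nvme0n1p2 &
    uevent_pid=$!                             # assumed; e.g. 3298718 in this run
    sgdisk /dev/nvme0n1 --zap-all
    flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
    flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
    wait "$uevent_pid"                        # both partitions now have device nodes
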
00:03:43.429 20:21:01 -- common/autotest_common.sh@1104 -- # dm_mount
00:03:43.429 20:21:01 -- setup/devices.sh@144 -- # pv=nvme0n1
00:03:43.429 20:21:01 -- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:03:43.429 20:21:01 -- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:03:43.429 20:21:01 -- setup/devices.sh@148 -- # partition_drive nvme0n1
00:03:43.429 20:21:01 -- setup/common.sh@39 -- # local disk=nvme0n1
00:03:43.429 20:21:01 -- setup/common.sh@40 -- # local part_no=2
00:03:43.429 20:21:01 -- setup/common.sh@41 -- # local size=1073741824
... (the same parts loop as before builds nvme0n1p1 and nvme0n1p2) ...
00:03:43.429 20:21:01 -- setup/common.sh@51 -- # (( size /= 512 ))
00:03:43.429 20:21:01 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:03:43.429 20:21:01 -- setup/common.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:03:44.372 Creating new GPT entries in memory.
00:03:44.372 GPT data structures destroyed! You may now partition the disk using fdisk or
00:03:44.372 other utilities.
00:03:44.372 20:21:02 -- setup/common.sh@57 -- # (( part = 1 ))
00:03:44.372 20:21:02 -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:44.372 20:21:02 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:44.372 20:21:02 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:44.372 20:21:02 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:45.758 Creating new GPT entries in memory.
00:03:45.758 The operation has completed successfully.
00:03:45.758 20:21:03 -- setup/common.sh@57 -- # (( part++ ))
00:03:45.758 20:21:03 -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:45.758 20:21:03 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:45.758 20:21:03 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:45.758 20:21:03 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:03:46.700 The operation has completed successfully.
00:03:46.700 20:21:04 -- setup/common.sh@57 -- # (( part++ ))
00:03:46.700 20:21:04 -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:46.700 20:21:04 -- setup/common.sh@62 -- # wait 3298718
00:03:46.700 20:21:04 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:03:46.700 20:21:04 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount
00:03:46.700 20:21:04 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:46.700 20:21:04 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:03:46.700 20:21:04 -- setup/devices.sh@160 -- # for t in {1..5}
00:03:46.700 20:21:04 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:46.700 20:21:04 -- setup/devices.sh@161 -- # break
00:03:46.700 20:21:04 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:46.700 20:21:04 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:03:46.700 20:21:04 -- setup/devices.sh@165 -- # dm=/dev/dm-0
00:03:46.700 20:21:04 -- setup/devices.sh@166 -- # dm=dm-0
00:03:46.700 20:21:04 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]]
00:03:46.700 20:21:04 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]]
00:03:46.700 20:21:04 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount
00:03:46.700 20:21:04 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount size=
00:03:46.700 20:21:04 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount
00:03:46.700 20:21:04 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:46.700 20:21:04 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:03:46.701 20:21:04 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount
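
With both partitions in place, the test stitches them into one device-mapper target, polls for the node (the {1..5} retry above), resolves the friendly name to its dm-N node, and confirms both partitions list it as a holder. A sketch of those steps; the linear concatenation table is an assumption, since the trace records only "dmsetup create nvme_dm_test" and not the table fed to it:

    # Each partition is 2097152 sectors (2048..2099199 and 2099200..4196351).
    table=$'0 2097152 linear /dev/nvme0n1p1 0\n2097152 2097152 linear /dev/nvme0n1p2 0'
    dmsetup create nvme_dm_test <<< "$table"      # assumed table; see note above
    dm=$(readlink -f /dev/mapper/nvme_dm_test)    # -> /dev/dm-0
    dm=${dm##*/}                                  # -> dm-0
    [[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]
    [[ -e /sys/class/block/nvme0n1p2/holders/$dm ]]
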
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:46.700 20:21:04 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:03:46.700 20:21:04 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:46.700 20:21:04 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:46.700 20:21:04 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:46.700 20:21:04 -- setup/devices.sh@53 -- # local found=0 00:03:46.700 20:21:04 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:46.700 20:21:04 -- setup/devices.sh@56 -- # : 00:03:46.700 20:21:04 -- setup/devices.sh@59 -- # local pci status 00:03:46.700 20:21:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.700 20:21:04 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:03:46.700 20:21:04 -- setup/devices.sh@47 -- # setup output config 00:03:46.700 20:21:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.701 20:21:04 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:49.242 20:21:07 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:49.242 20:21:07 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:49.242 20:21:07 -- setup/devices.sh@63 -- # found=1 00:03:49.242 20:21:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.242 20:21:07 -- setup/devices.sh@62 -- # [[ 0000:ca:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:49.242 20:21:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.242 20:21:07 -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:49.242 20:21:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.242 20:21:07 -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:49.242 20:21:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.242 20:21:07 -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:49.242 20:21:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.242 20:21:07 -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:49.242 20:21:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.242 20:21:07 -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:49.242 20:21:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.242 20:21:07 -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:49.242 20:21:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.242 20:21:07 -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:49.242 20:21:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.242 20:21:07 -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:49.242 20:21:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.242 20:21:07 -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:49.242 20:21:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.242 20:21:07 -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:49.242 20:21:07 -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:03:49.242 20:21:07 -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:49.242 20:21:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.242 20:21:07 -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:49.242 20:21:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.242 20:21:07 -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:49.242 20:21:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.242 20:21:07 -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:49.242 20:21:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.242 20:21:07 -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:49.242 20:21:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.242 20:21:07 -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:49.242 20:21:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.502 20:21:07 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:49.502 20:21:07 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:49.502 20:21:07 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:49.502 20:21:07 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:49.502 20:21:07 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:49.502 20:21:07 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:49.502 20:21:07 -- setup/devices.sh@184 -- # verify 0000:c9:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:49.502 20:21:07 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:03:49.502 20:21:07 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:49.502 20:21:07 -- setup/devices.sh@50 -- # local mount_point= 00:03:49.502 20:21:07 -- setup/devices.sh@51 -- # local test_file= 00:03:49.502 20:21:07 -- setup/devices.sh@53 -- # local found=0 00:03:49.502 20:21:07 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:49.502 20:21:07 -- setup/devices.sh@59 -- # local pci status 00:03:49.502 20:21:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.502 20:21:07 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:03:49.502 20:21:07 -- setup/devices.sh@47 -- # setup output config 00:03:49.502 20:21:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.502 20:21:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:52.044 20:21:09 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:52.044 20:21:09 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:52.044 20:21:09 -- setup/devices.sh@63 -- # found=1 00:03:52.044 20:21:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.044 20:21:09 -- setup/devices.sh@62 -- # [[ 0000:ca:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:52.044 20:21:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.044 20:21:10 -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:52.044 20:21:10 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.044 20:21:10 -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:52.044 20:21:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.044 20:21:10 -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:52.044 20:21:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.044 20:21:10 -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:52.044 20:21:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.044 20:21:10 -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:52.044 20:21:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.044 20:21:10 -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:52.044 20:21:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.044 20:21:10 -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:52.044 20:21:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.044 20:21:10 -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:52.044 20:21:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.044 20:21:10 -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:52.044 20:21:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.044 20:21:10 -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:52.044 20:21:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.044 20:21:10 -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:52.044 20:21:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.044 20:21:10 -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:52.044 20:21:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.044 20:21:10 -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:52.044 20:21:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.044 20:21:10 -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:52.044 20:21:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.044 20:21:10 -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:52.044 20:21:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.044 20:21:10 -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:52.044 20:21:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.304 20:21:10 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:52.304 20:21:10 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:52.304 20:21:10 -- setup/devices.sh@68 -- # return 0 00:03:52.304 20:21:10 -- setup/devices.sh@187 -- # cleanup_dm 00:03:52.304 20:21:10 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:52.305 20:21:10 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:52.305 20:21:10 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:52.305 20:21:10 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:52.305 20:21:10 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:52.305 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:52.305 20:21:10 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:52.305 20:21:10 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:52.305 
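
The cleanup pass above is the inverse of the setup: unmount, tear down the device-mapper target, then wipe the filesystem signatures so the next suite starts from a blank drive. Reduced to plain commands it looks roughly like this (device and mapper names are the ones in this log; the real logic lives in test/setup/devices.sh):

    dm_mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount
    mountpoint -q "$dm_mount" && umount "$dm_mount"             # unmount if still mounted
    [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1      # clears the ext4 signature
    [[ -b /dev/nvme0n1p2 ]] && wipefs --all /dev/nvme0n1p2
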
00:03:52.305 real 0m8.784s 00:03:52.305 user 0m1.857s 00:03:52.305 sys 0m3.526s 00:03:52.305 20:21:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.305 20:21:10 -- common/autotest_common.sh@10 -- # set +x 00:03:52.305 ************************************ 00:03:52.305 END TEST dm_mount 00:03:52.305 ************************************ 00:03:52.305 20:21:10 -- setup/devices.sh@1 -- # cleanup 00:03:52.305 20:21:10 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:52.305 20:21:10 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.305 20:21:10 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:52.305 20:21:10 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:52.305 20:21:10 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:52.305 20:21:10 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:52.564 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:52.564 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:03:52.564 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:52.564 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:52.564 20:21:10 -- setup/devices.sh@12 -- # cleanup_dm 00:03:52.564 20:21:10 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:52.564 20:21:10 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:52.564 20:21:10 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:52.564 20:21:10 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:52.564 20:21:10 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:52.564 20:21:10 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:52.564 00:03:52.564 real 0m24.091s 00:03:52.564 user 0m5.978s 00:03:52.564 sys 0m11.588s 00:03:52.564 20:21:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.564 20:21:10 -- common/autotest_common.sh@10 -- # set +x 00:03:52.565 ************************************ 00:03:52.565 END TEST devices 00:03:52.565 ************************************ 00:03:52.565 00:03:52.565 real 1m24.037s 00:03:52.565 user 0m22.203s 00:03:52.565 sys 0m42.084s 00:03:52.565 20:21:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.565 20:21:10 -- common/autotest_common.sh@10 -- # set +x 00:03:52.565 ************************************ 00:03:52.565 END TEST setup.sh 00:03:52.565 ************************************ 00:03:52.565 20:21:10 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh status 00:03:55.121 Hugepages 00:03:55.121 node hugesize free / total 00:03:55.121 node0 1048576kB 0 / 0 00:03:55.121 node0 2048kB 2048 / 2048 00:03:55.382 node1 1048576kB 0 / 0 00:03:55.382 node1 2048kB 0 / 0 00:03:55.382 00:03:55.382 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:55.382 DSA 0000:6a:01.0 8086 0b25 0 idxd - - 00:03:55.382 IAA 0000:6a:02.0 8086 0cfe 0 idxd - - 00:03:55.382 DSA 0000:6f:01.0 8086 0b25 0 idxd - - 00:03:55.382 IAA 0000:6f:02.0 8086 0cfe 0 idxd - - 00:03:55.382 DSA 0000:74:01.0 8086 0b25 0 idxd - - 00:03:55.382 IAA 0000:74:02.0 8086 0cfe 0 idxd - - 00:03:55.382 DSA 0000:79:01.0 8086 0b25 0 idxd - - 00:03:55.382 IAA 0000:79:02.0 8086 0cfe 0 idxd - - 00:03:55.382 NVMe 0000:c9:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:55.382 NVMe 0000:ca:00.0 8086 0a54 1 nvme nvme1 nvme1n1 00:03:55.382 DSA 0000:e7:01.0 8086 0b25 1 idxd 
- - 00:03:55.382 IAA 0000:e7:02.0 8086 0cfe 1 idxd - - 00:03:55.382 DSA 0000:ec:01.0 8086 0b25 1 idxd - - 00:03:55.382 IAA 0000:ec:02.0 8086 0cfe 1 idxd - - 00:03:55.382 DSA 0000:f1:01.0 8086 0b25 1 idxd - - 00:03:55.382 IAA 0000:f1:02.0 8086 0cfe 1 idxd - - 00:03:55.382 DSA 0000:f6:01.0 8086 0b25 1 idxd - - 00:03:55.382 IAA 0000:f6:02.0 8086 0cfe 1 idxd - - 00:03:55.382 20:21:13 -- spdk/autotest.sh@141 -- # uname -s 00:03:55.382 20:21:13 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:03:55.382 20:21:13 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:03:55.382 20:21:13 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:03:58.681 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:58.681 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:58.681 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:58.681 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:03:58.681 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:58.681 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:03:58.681 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:58.681 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:03:58.681 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:58.681 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:03:58.681 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:03:58.681 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:03:58.681 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:58.681 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:03:58.940 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:58.940 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:04:00.324 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:04:00.585 0000:ca:00.0 (8086 0a54): nvme -> vfio-pci 00:04:00.845 20:21:19 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:01.786 20:21:20 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:01.786 20:21:20 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:01.786 20:21:20 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:04:01.786 20:21:20 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:04:01.786 20:21:20 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:01.786 20:21:20 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:01.786 20:21:20 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:01.786 20:21:20 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:01.786 20:21:20 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:02.047 20:21:20 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:02.047 20:21:20 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:c9:00.0 0000:ca:00.0 00:04:02.047 20:21:20 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:04:04.586 Waiting for block devices as requested 00:04:04.586 0000:c9:00.0 (8086 0a54): vfio-pci -> nvme 00:04:04.847 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:04:04.847 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:04:04.847 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:04:05.108 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:04:05.108 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:04:05.108 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:04:05.108 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:04:05.369 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:04:05.369 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:04:05.369 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:04:05.369 
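
The controller list used by the rebind and revert steps comes from gen_nvme.sh; the jq expression is the one visible in the trace above. As a standalone sketch (rootdir spelled out for clarity; on this node it yields the two controllers shown):

    rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"    # 0000:c9:00.0 and 0000:ca:00.0 here
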
0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:04:05.628 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:04:05.628 0000:ca:00.0 (8086 0a54): vfio-pci -> nvme 00:04:05.628 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:04:05.888 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:04:05.888 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:04:05.888 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:04:06.148 20:21:24 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:06.148 20:21:24 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:c9:00.0 00:04:06.148 20:21:24 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:06.148 20:21:24 -- common/autotest_common.sh@1487 -- # grep 0000:c9:00.0/nvme/nvme 00:04:06.148 20:21:24 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:c7/0000:c7:03.0/0000:c9:00.0/nvme/nvme0 00:04:06.148 20:21:24 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:c7/0000:c7:03.0/0000:c9:00.0/nvme/nvme0 ]] 00:04:06.148 20:21:24 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:c7/0000:c7:03.0/0000:c9:00.0/nvme/nvme0 00:04:06.148 20:21:24 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:06.148 20:21:24 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:04:06.148 20:21:24 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:04:06.148 20:21:24 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:06.148 20:21:24 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:04:06.148 20:21:24 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:06.148 20:21:24 -- common/autotest_common.sh@1530 -- # oacs=' 0xe' 00:04:06.148 20:21:24 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:06.148 20:21:24 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:06.148 20:21:24 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:06.148 20:21:24 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:04:06.148 20:21:24 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:06.148 20:21:24 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:06.148 20:21:24 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:06.149 20:21:24 -- common/autotest_common.sh@1542 -- # continue 00:04:06.149 20:21:24 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:06.149 20:21:24 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:ca:00.0 00:04:06.149 20:21:24 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:06.149 20:21:24 -- common/autotest_common.sh@1487 -- # grep 0000:ca:00.0/nvme/nvme 00:04:06.149 20:21:24 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:c7/0000:c7:05.0/0000:ca:00.0/nvme/nvme1 00:04:06.149 20:21:24 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:c7/0000:c7:05.0/0000:ca:00.0/nvme/nvme1 ]] 00:04:06.149 20:21:24 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:c7/0000:c7:05.0/0000:ca:00.0/nvme/nvme1 00:04:06.149 20:21:24 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:06.149 20:21:24 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:04:06.149 20:21:24 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:04:06.149 20:21:24 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:04:06.149 20:21:24 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:06.149 20:21:24 -- 
common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:06.149 20:21:24 -- common/autotest_common.sh@1530 -- # oacs=' 0xe' 00:04:06.149 20:21:24 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:06.149 20:21:24 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:06.149 20:21:24 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:04:06.149 20:21:24 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:06.149 20:21:24 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:06.149 20:21:24 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:06.149 20:21:24 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:06.149 20:21:24 -- common/autotest_common.sh@1542 -- # continue 00:04:06.149 20:21:24 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:04:06.149 20:21:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:06.149 20:21:24 -- common/autotest_common.sh@10 -- # set +x 00:04:06.149 20:21:24 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:04:06.149 20:21:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:06.149 20:21:24 -- common/autotest_common.sh@10 -- # set +x 00:04:06.149 20:21:24 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:04:09.450 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:09.450 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:09.450 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:09.450 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:04:09.450 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:09.450 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:04:09.450 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:09.450 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:04:09.450 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:09.450 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:04:09.450 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:04:09.450 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:04:09.450 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:09.450 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:04:09.450 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:09.450 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:04:10.859 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:04:11.121 0000:ca:00.0 (8086 0a54): nvme -> vfio-pci 00:04:11.381 20:21:29 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:04:11.381 20:21:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:11.381 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:04:11.381 20:21:29 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:04:11.381 20:21:29 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:11.381 20:21:29 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:11.381 20:21:29 -- common/autotest_common.sh@1562 -- # bdfs=() 00:04:11.381 20:21:29 -- common/autotest_common.sh@1562 -- # local bdfs 00:04:11.381 20:21:29 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:11.381 20:21:29 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:11.381 20:21:29 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:11.381 20:21:29 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:11.381 20:21:29 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:11.381 20:21:29 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:11.642 20:21:29 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:11.642 20:21:29 -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:c9:00.0 0000:ca:00.0 00:04:11.642 20:21:29 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:11.642 20:21:29 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:c9:00.0/device 00:04:11.642 20:21:29 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:04:11.642 20:21:29 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:11.642 20:21:29 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:04:11.642 20:21:29 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:11.642 20:21:29 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:ca:00.0/device 00:04:11.642 20:21:29 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:04:11.642 20:21:29 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:11.642 20:21:29 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:04:11.642 20:21:29 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:c9:00.0 0000:ca:00.0 00:04:11.642 20:21:29 -- common/autotest_common.sh@1577 -- # [[ -z 0000:c9:00.0 ]] 00:04:11.642 20:21:29 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=3309779 00:04:11.642 20:21:29 -- common/autotest_common.sh@1583 -- # waitforlisten 3309779 00:04:11.642 20:21:29 -- common/autotest_common.sh@819 -- # '[' -z 3309779 ']' 00:04:11.642 20:21:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.642 20:21:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:11.642 20:21:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:11.642 20:21:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:11.642 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:04:11.642 20:21:29 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:11.642 [2024-04-26 20:21:29.888920] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
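
opal_revert_cleanup starts a bare spdk_tgt and then blocks until the JSON-RPC socket answers before issuing any commands. A simplified equivalent of that launch-and-wait, assuming the default /var/tmp/spdk.sock socket (the polling loop stands in for the real waitforlisten helper):

    rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk
    "$rootdir/build/bin/spdk_tgt" &
    spdk_tgt_pid=$!                 # pid 3309779 in this run; killprocess stops it later
    until "$rootdir/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5                   # keep polling until the target is listening
    done
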
00:04:11.642 [2024-04-26 20:21:29.889055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3309779 ] 00:04:11.642 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.903 [2024-04-26 20:21:30.026053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.903 [2024-04-26 20:21:30.128080] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:11.903 [2024-04-26 20:21:30.128295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.473 20:21:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:12.473 20:21:30 -- common/autotest_common.sh@852 -- # return 0 00:04:12.473 20:21:30 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:04:12.473 20:21:30 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:04:12.473 20:21:30 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:c9:00.0 00:04:15.856 nvme0n1 00:04:15.856 20:21:33 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:15.856 [2024-04-26 20:21:33.683829] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:15.856 request: 00:04:15.856 { 00:04:15.856 "nvme_ctrlr_name": "nvme0", 00:04:15.856 "password": "test", 00:04:15.856 "method": "bdev_nvme_opal_revert", 00:04:15.856 "req_id": 1 00:04:15.856 } 00:04:15.856 Got JSON-RPC error response 00:04:15.856 response: 00:04:15.856 { 00:04:15.856 "code": -32602, 00:04:15.856 "message": "Invalid parameters" 00:04:15.856 } 00:04:15.856 20:21:33 -- common/autotest_common.sh@1589 -- # true 00:04:15.856 20:21:33 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:04:15.856 20:21:33 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:04:15.856 20:21:33 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme1 -t pcie -a 0000:ca:00.0 00:04:18.402 nvme1n1 00:04:18.402 20:21:36 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme1 -p test 00:04:18.664 [2024-04-26 20:21:36.796458] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme1 not support opal 00:04:18.664 request: 00:04:18.664 { 00:04:18.664 "nvme_ctrlr_name": "nvme1", 00:04:18.664 "password": "test", 00:04:18.664 "method": "bdev_nvme_opal_revert", 00:04:18.664 "req_id": 1 00:04:18.664 } 00:04:18.664 Got JSON-RPC error response 00:04:18.664 response: 00:04:18.664 { 00:04:18.664 "code": -32602, 00:04:18.664 "message": "Invalid parameters" 00:04:18.664 } 00:04:18.664 20:21:36 -- common/autotest_common.sh@1589 -- # true 00:04:18.664 20:21:36 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:04:18.664 20:21:36 -- common/autotest_common.sh@1593 -- # killprocess 3309779 00:04:18.664 20:21:36 -- common/autotest_common.sh@926 -- # '[' -z 3309779 ']' 00:04:18.664 20:21:36 -- common/autotest_common.sh@930 -- # kill -0 3309779 00:04:18.664 20:21:36 -- common/autotest_common.sh@931 -- # uname 00:04:18.664 20:21:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:18.664 20:21:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3309779 00:04:18.664 
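
Both RPCs in the loop above can be replayed by hand. On these drives bdev_nvme_opal_revert returns -32602 because the controllers simply do not support Opal, which the cleanup treats as a pass (controller names and addresses taken from the log):

    "$rootdir/scripts/rpc.py" bdev_nvme_attach_controller -b nvme1 -t pcie -a 0000:ca:00.0
    "$rootdir/scripts/rpc.py" bdev_nvme_opal_revert -b nvme1 -p test   # -> Invalid parameters
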
20:21:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:18.664 20:21:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:18.664 20:21:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3309779' 00:04:18.664 killing process with pid 3309779 00:04:18.664 20:21:36 -- common/autotest_common.sh@945 -- # kill 3309779 00:04:18.664 20:21:36 -- common/autotest_common.sh@950 -- # wait 3309779 00:04:21.961 20:21:40 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:04:21.961 20:21:40 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:04:21.961 20:21:40 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:21.961 20:21:40 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:21.961 20:21:40 -- spdk/autotest.sh@173 -- # timing_enter lib 00:04:21.961 20:21:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:21.961 20:21:40 -- common/autotest_common.sh@10 -- # set +x 00:04:21.961 20:21:40 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env.sh 00:04:21.961 20:21:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:21.961 20:21:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:21.961 20:21:40 -- common/autotest_common.sh@10 -- # set +x 00:04:21.961 ************************************ 00:04:21.961 START TEST env 00:04:21.961 ************************************ 00:04:21.961 20:21:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env.sh 00:04:22.221 * Looking for test storage... 00:04:22.221 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env 00:04:22.222 20:21:40 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/memory/memory_ut 00:04:22.222 20:21:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:22.222 20:21:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:22.222 20:21:40 -- common/autotest_common.sh@10 -- # set +x 00:04:22.222 ************************************ 00:04:22.222 START TEST env_memory 00:04:22.222 ************************************ 00:04:22.222 20:21:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/memory/memory_ut 00:04:22.222 00:04:22.222 00:04:22.222 CUnit - A unit testing framework for C - Version 2.1-3 00:04:22.222 http://cunit.sourceforge.net/ 00:04:22.222 00:04:22.222 00:04:22.222 Suite: memory 00:04:22.222 Test: alloc and free memory map ...[2024-04-26 20:21:40.436052] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:22.222 passed 00:04:22.222 Test: mem map translation ...[2024-04-26 20:21:40.483540] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:22.222 [2024-04-26 20:21:40.483579] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:22.222 [2024-04-26 20:21:40.483664] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:22.222 [2024-04-26 20:21:40.483689] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 
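
Every suite in this log is wrapped by run_test from common/autotest_common.sh, which prints the starred START/END banners and the real/user/sys timing lines seen throughout. A rough sketch of that wrapper (simplified; the real function also validates its arguments and toggles xtrace, as the @1077/@1083 trace lines show):

    run_test() {
        local test_name=$1; shift
        echo "START TEST $test_name"
        time "$@"                   # the suite body; its xtrace output lands in between
        echo "END TEST $test_name"
    }
    run_test env_memory "$rootdir/test/env/memory/memory_ut"
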
00:04:22.222 passed 00:04:22.483 Test: mem map registration ...[2024-04-26 20:21:40.570504] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:22.483 [2024-04-26 20:21:40.570538] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:22.483 passed 00:04:22.483 Test: mem map adjacent registrations ...passed 00:04:22.483 00:04:22.483 Run Summary: Type Total Ran Passed Failed Inactive 00:04:22.483 suites 1 1 n/a 0 0 00:04:22.483 tests 4 4 4 0 0 00:04:22.483 asserts 152 152 152 0 n/a 00:04:22.483 00:04:22.483 Elapsed time = 0.294 seconds 00:04:22.483 00:04:22.483 real 0m0.321s 00:04:22.483 user 0m0.294s 00:04:22.483 sys 0m0.025s 00:04:22.483 20:21:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.483 20:21:40 -- common/autotest_common.sh@10 -- # set +x 00:04:22.483 ************************************ 00:04:22.483 END TEST env_memory 00:04:22.483 ************************************ 00:04:22.483 20:21:40 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:22.483 20:21:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:22.483 20:21:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:22.483 20:21:40 -- common/autotest_common.sh@10 -- # set +x 00:04:22.483 ************************************ 00:04:22.483 START TEST env_vtophys 00:04:22.483 ************************************ 00:04:22.483 20:21:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:22.483 EAL: lib.eal log level changed from notice to debug 00:04:22.483 EAL: Detected lcore 0 as core 0 on socket 0 00:04:22.483 EAL: Detected lcore 1 as core 1 on socket 0 00:04:22.483 EAL: Detected lcore 2 as core 2 on socket 0 00:04:22.483 EAL: Detected lcore 3 as core 3 on socket 0 00:04:22.483 EAL: Detected lcore 4 as core 4 on socket 0 00:04:22.483 EAL: Detected lcore 5 as core 5 on socket 0 00:04:22.483 EAL: Detected lcore 6 as core 6 on socket 0 00:04:22.483 EAL: Detected lcore 7 as core 7 on socket 0 00:04:22.483 EAL: Detected lcore 8 as core 8 on socket 0 00:04:22.483 EAL: Detected lcore 9 as core 9 on socket 0 00:04:22.483 EAL: Detected lcore 10 as core 10 on socket 0 00:04:22.483 EAL: Detected lcore 11 as core 11 on socket 0 00:04:22.483 EAL: Detected lcore 12 as core 12 on socket 0 00:04:22.483 EAL: Detected lcore 13 as core 13 on socket 0 00:04:22.483 EAL: Detected lcore 14 as core 14 on socket 0 00:04:22.483 EAL: Detected lcore 15 as core 15 on socket 0 00:04:22.483 EAL: Detected lcore 16 as core 16 on socket 0 00:04:22.483 EAL: Detected lcore 17 as core 17 on socket 0 00:04:22.483 EAL: Detected lcore 18 as core 18 on socket 0 00:04:22.483 EAL: Detected lcore 19 as core 19 on socket 0 00:04:22.483 EAL: Detected lcore 20 as core 20 on socket 0 00:04:22.483 EAL: Detected lcore 21 as core 21 on socket 0 00:04:22.483 EAL: Detected lcore 22 as core 22 on socket 0 00:04:22.483 EAL: Detected lcore 23 as core 23 on socket 0 00:04:22.483 EAL: Detected lcore 24 as core 24 on socket 0 00:04:22.483 EAL: Detected lcore 25 as core 25 on socket 0 00:04:22.483 EAL: Detected lcore 26 as core 26 on socket 0 00:04:22.483 EAL: Detected lcore 27 as core 27 on socket 0 00:04:22.483 EAL: Detected lcore 28 as core 28 on socket 0 00:04:22.483 EAL: Detected lcore 29 as 
core 29 on socket 0 00:04:22.483 EAL: Detected lcore 30 as core 30 on socket 0 00:04:22.483 EAL: Detected lcore 31 as core 31 on socket 0 00:04:22.483 EAL: Detected lcore 32 as core 0 on socket 1 00:04:22.483 EAL: Detected lcore 33 as core 1 on socket 1 00:04:22.483 EAL: Detected lcore 34 as core 2 on socket 1 00:04:22.483 EAL: Detected lcore 35 as core 3 on socket 1 00:04:22.483 EAL: Detected lcore 36 as core 4 on socket 1 00:04:22.483 EAL: Detected lcore 37 as core 5 on socket 1 00:04:22.483 EAL: Detected lcore 38 as core 6 on socket 1 00:04:22.483 EAL: Detected lcore 39 as core 7 on socket 1 00:04:22.483 EAL: Detected lcore 40 as core 8 on socket 1 00:04:22.483 EAL: Detected lcore 41 as core 9 on socket 1 00:04:22.483 EAL: Detected lcore 42 as core 10 on socket 1 00:04:22.483 EAL: Detected lcore 43 as core 11 on socket 1 00:04:22.483 EAL: Detected lcore 44 as core 12 on socket 1 00:04:22.483 EAL: Detected lcore 45 as core 13 on socket 1 00:04:22.483 EAL: Detected lcore 46 as core 14 on socket 1 00:04:22.483 EAL: Detected lcore 47 as core 15 on socket 1 00:04:22.483 EAL: Detected lcore 48 as core 16 on socket 1 00:04:22.483 EAL: Detected lcore 49 as core 17 on socket 1 00:04:22.483 EAL: Detected lcore 50 as core 18 on socket 1 00:04:22.483 EAL: Detected lcore 51 as core 19 on socket 1 00:04:22.483 EAL: Detected lcore 52 as core 20 on socket 1 00:04:22.483 EAL: Detected lcore 53 as core 21 on socket 1 00:04:22.483 EAL: Detected lcore 54 as core 22 on socket 1 00:04:22.483 EAL: Detected lcore 55 as core 23 on socket 1 00:04:22.483 EAL: Detected lcore 56 as core 24 on socket 1 00:04:22.483 EAL: Detected lcore 57 as core 25 on socket 1 00:04:22.483 EAL: Detected lcore 58 as core 26 on socket 1 00:04:22.483 EAL: Detected lcore 59 as core 27 on socket 1 00:04:22.483 EAL: Detected lcore 60 as core 28 on socket 1 00:04:22.483 EAL: Detected lcore 61 as core 29 on socket 1 00:04:22.483 EAL: Detected lcore 62 as core 30 on socket 1 00:04:22.483 EAL: Detected lcore 63 as core 31 on socket 1 00:04:22.483 EAL: Detected lcore 64 as core 0 on socket 0 00:04:22.483 EAL: Detected lcore 65 as core 1 on socket 0 00:04:22.483 EAL: Detected lcore 66 as core 2 on socket 0 00:04:22.483 EAL: Detected lcore 67 as core 3 on socket 0 00:04:22.483 EAL: Detected lcore 68 as core 4 on socket 0 00:04:22.483 EAL: Detected lcore 69 as core 5 on socket 0 00:04:22.483 EAL: Detected lcore 70 as core 6 on socket 0 00:04:22.483 EAL: Detected lcore 71 as core 7 on socket 0 00:04:22.483 EAL: Detected lcore 72 as core 8 on socket 0 00:04:22.483 EAL: Detected lcore 73 as core 9 on socket 0 00:04:22.483 EAL: Detected lcore 74 as core 10 on socket 0 00:04:22.483 EAL: Detected lcore 75 as core 11 on socket 0 00:04:22.483 EAL: Detected lcore 76 as core 12 on socket 0 00:04:22.483 EAL: Detected lcore 77 as core 13 on socket 0 00:04:22.483 EAL: Detected lcore 78 as core 14 on socket 0 00:04:22.483 EAL: Detected lcore 79 as core 15 on socket 0 00:04:22.483 EAL: Detected lcore 80 as core 16 on socket 0 00:04:22.483 EAL: Detected lcore 81 as core 17 on socket 0 00:04:22.483 EAL: Detected lcore 82 as core 18 on socket 0 00:04:22.483 EAL: Detected lcore 83 as core 19 on socket 0 00:04:22.483 EAL: Detected lcore 84 as core 20 on socket 0 00:04:22.483 EAL: Detected lcore 85 as core 21 on socket 0 00:04:22.483 EAL: Detected lcore 86 as core 22 on socket 0 00:04:22.483 EAL: Detected lcore 87 as core 23 on socket 0 00:04:22.483 EAL: Detected lcore 88 as core 24 on socket 0 00:04:22.483 EAL: Detected lcore 89 as core 25 on socket 0 00:04:22.483 
EAL: Detected lcore 90 as core 26 on socket 0 00:04:22.483 EAL: Detected lcore 91 as core 27 on socket 0 00:04:22.483 EAL: Detected lcore 92 as core 28 on socket 0 00:04:22.483 EAL: Detected lcore 93 as core 29 on socket 0 00:04:22.483 EAL: Detected lcore 94 as core 30 on socket 0 00:04:22.483 EAL: Detected lcore 95 as core 31 on socket 0 00:04:22.483 EAL: Detected lcore 96 as core 0 on socket 1 00:04:22.483 EAL: Detected lcore 97 as core 1 on socket 1 00:04:22.483 EAL: Detected lcore 98 as core 2 on socket 1 00:04:22.483 EAL: Detected lcore 99 as core 3 on socket 1 00:04:22.483 EAL: Detected lcore 100 as core 4 on socket 1 00:04:22.483 EAL: Detected lcore 101 as core 5 on socket 1 00:04:22.483 EAL: Detected lcore 102 as core 6 on socket 1 00:04:22.483 EAL: Detected lcore 103 as core 7 on socket 1 00:04:22.483 EAL: Detected lcore 104 as core 8 on socket 1 00:04:22.483 EAL: Detected lcore 105 as core 9 on socket 1 00:04:22.483 EAL: Detected lcore 106 as core 10 on socket 1 00:04:22.483 EAL: Detected lcore 107 as core 11 on socket 1 00:04:22.483 EAL: Detected lcore 108 as core 12 on socket 1 00:04:22.483 EAL: Detected lcore 109 as core 13 on socket 1 00:04:22.483 EAL: Detected lcore 110 as core 14 on socket 1 00:04:22.483 EAL: Detected lcore 111 as core 15 on socket 1 00:04:22.483 EAL: Detected lcore 112 as core 16 on socket 1 00:04:22.483 EAL: Detected lcore 113 as core 17 on socket 1 00:04:22.483 EAL: Detected lcore 114 as core 18 on socket 1 00:04:22.483 EAL: Detected lcore 115 as core 19 on socket 1 00:04:22.483 EAL: Detected lcore 116 as core 20 on socket 1 00:04:22.483 EAL: Detected lcore 117 as core 21 on socket 1 00:04:22.483 EAL: Detected lcore 118 as core 22 on socket 1 00:04:22.483 EAL: Detected lcore 119 as core 23 on socket 1 00:04:22.483 EAL: Detected lcore 120 as core 24 on socket 1 00:04:22.483 EAL: Detected lcore 121 as core 25 on socket 1 00:04:22.483 EAL: Detected lcore 122 as core 26 on socket 1 00:04:22.483 EAL: Detected lcore 123 as core 27 on socket 1 00:04:22.483 EAL: Detected lcore 124 as core 28 on socket 1 00:04:22.483 EAL: Detected lcore 125 as core 29 on socket 1 00:04:22.483 EAL: Detected lcore 126 as core 30 on socket 1 00:04:22.483 EAL: Detected lcore 127 as core 31 on socket 1 00:04:22.483 EAL: Maximum logical cores by configuration: 128 00:04:22.483 EAL: Detected CPU lcores: 128 00:04:22.483 EAL: Detected NUMA nodes: 2 00:04:22.483 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:22.483 EAL: Detected shared linkage of DPDK 00:04:22.483 EAL: No shared files mode enabled, IPC will be disabled 00:04:22.483 EAL: Bus pci wants IOVA as 'DC' 00:04:22.483 EAL: Buses did not request a specific IOVA mode. 00:04:22.483 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:22.483 EAL: Selected IOVA mode 'VA' 00:04:22.483 EAL: No free 2048 kB hugepages reported on node 1 00:04:22.483 EAL: Probing VFIO support... 00:04:22.483 EAL: IOMMU type 1 (Type 1) is supported 00:04:22.483 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:22.483 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:22.483 EAL: VFIO support initialized 00:04:22.483 EAL: Ask a virtual area of 0x2e000 bytes 00:04:22.484 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:22.484 EAL: Setting up physically contiguous memory... 
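
The lcore walk above is EAL reading the host topology: 2 sockets x 32 cores x 2 hyperthreads = 128 logical cores, and with an IOMMU present it selects IOVA mode 'VA' and initializes VFIO. The same facts can be confirmed with ordinary host tools (these commands are not part of the test):

    lscpu | grep -E '^(CPU\(s\)|Thread|Core|Socket)'   # 128 CPUs across 2 sockets
    ls /sys/kernel/iommu_groups | wc -l                # non-zero: VFIO can use the IOMMU
    grep -i huge /proc/meminfo                         # the 2 MB pages EAL maps below
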
00:04:22.484 EAL: Setting maximum number of open files to 524288 00:04:22.484 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:22.484 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:22.484 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:22.484 EAL: Ask a virtual area of 0x61000 bytes 00:04:22.484 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:22.484 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:22.484 EAL: Ask a virtual area of 0x400000000 bytes 00:04:22.484 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:22.484 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:22.484 EAL: Ask a virtual area of 0x61000 bytes 00:04:22.484 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:22.484 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:22.484 EAL: Ask a virtual area of 0x400000000 bytes 00:04:22.484 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:22.484 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:22.484 EAL: Ask a virtual area of 0x61000 bytes 00:04:22.484 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:22.484 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:22.484 EAL: Ask a virtual area of 0x400000000 bytes 00:04:22.484 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:22.484 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:22.484 EAL: Ask a virtual area of 0x61000 bytes 00:04:22.484 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:22.484 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:22.484 EAL: Ask a virtual area of 0x400000000 bytes 00:04:22.484 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:22.484 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:22.484 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:22.484 EAL: Ask a virtual area of 0x61000 bytes 00:04:22.484 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:22.484 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:22.484 EAL: Ask a virtual area of 0x400000000 bytes 00:04:22.484 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:22.484 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:22.484 EAL: Ask a virtual area of 0x61000 bytes 00:04:22.484 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:22.484 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:22.484 EAL: Ask a virtual area of 0x400000000 bytes 00:04:22.484 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:22.484 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:22.484 EAL: Ask a virtual area of 0x61000 bytes 00:04:22.484 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:22.484 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:22.484 EAL: Ask a virtual area of 0x400000000 bytes 00:04:22.484 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:22.484 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:22.484 EAL: Ask a virtual area of 0x61000 bytes 00:04:22.484 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:22.484 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:22.484 EAL: Ask a virtual area of 0x400000000 bytes 00:04:22.484 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:22.484 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:22.484 EAL: Hugepages will be freed exactly as allocated. 00:04:22.484 EAL: No shared files mode enabled, IPC is disabled 00:04:22.484 EAL: No shared files mode enabled, IPC is disabled 00:04:22.484 EAL: TSC frequency is ~1900000 KHz 00:04:22.484 EAL: Main lcore 0 is ready (tid=7fcd04531a40;cpuset=[0]) 00:04:22.484 EAL: Trying to obtain current memory policy. 00:04:22.484 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.484 EAL: Restoring previous memory policy: 0 00:04:22.484 EAL: request: mp_malloc_sync 00:04:22.484 EAL: No shared files mode enabled, IPC is disabled 00:04:22.484 EAL: Heap on socket 0 was expanded by 2MB 00:04:22.484 EAL: No shared files mode enabled, IPC is disabled 00:04:22.745 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:22.745 EAL: Mem event callback 'spdk:(nil)' registered 00:04:22.745 00:04:22.745 00:04:22.745 CUnit - A unit testing framework for C - Version 2.1-3 00:04:22.745 http://cunit.sourceforge.net/ 00:04:22.745 00:04:22.745 00:04:22.745 Suite: components_suite 00:04:22.745 Test: vtophys_malloc_test ...passed 00:04:22.745 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:22.745 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.745 EAL: Restoring previous memory policy: 4 00:04:22.745 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.745 EAL: request: mp_malloc_sync 00:04:22.745 EAL: No shared files mode enabled, IPC is disabled 00:04:22.745 EAL: Heap on socket 0 was expanded by 4MB 00:04:22.745 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.745 EAL: request: mp_malloc_sync 00:04:22.745 EAL: No shared files mode enabled, IPC is disabled 00:04:22.745 EAL: Heap on socket 0 was shrunk by 4MB 00:04:22.745 EAL: Trying to obtain current memory policy. 00:04:22.745 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.745 EAL: Restoring previous memory policy: 4 00:04:22.745 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.745 EAL: request: mp_malloc_sync 00:04:22.745 EAL: No shared files mode enabled, IPC is disabled 00:04:22.745 EAL: Heap on socket 0 was expanded by 6MB 00:04:22.745 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.745 EAL: request: mp_malloc_sync 00:04:22.745 EAL: No shared files mode enabled, IPC is disabled 00:04:22.745 EAL: Heap on socket 0 was shrunk by 6MB 00:04:23.005 EAL: Trying to obtain current memory policy. 00:04:23.005 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.005 EAL: Restoring previous memory policy: 4 00:04:23.005 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.005 EAL: request: mp_malloc_sync 00:04:23.005 EAL: No shared files mode enabled, IPC is disabled 00:04:23.005 EAL: Heap on socket 0 was expanded by 10MB 00:04:23.005 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.005 EAL: request: mp_malloc_sync 00:04:23.005 EAL: No shared files mode enabled, IPC is disabled 00:04:23.005 EAL: Heap on socket 0 was shrunk by 10MB 00:04:23.005 EAL: Trying to obtain current memory policy. 
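
The expand/shrink pairs around this point step through power-of-two allocations. Each reported heap growth is the buffer size plus one extra 2 MB hugepage, which is why the log reads 4, 6, 10, 18, ... 1026 MB; that overhead reading is an inference from the numbers, not something the test prints. The sequence itself:

    for ((k = 1; k <= 10; k++)); do
        printf '%dMB\n' $(( (1 << k) + 2 ))   # 4MB 6MB 10MB 18MB ... 514MB 1026MB
    done
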
00:04:23.005 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.005 EAL: Restoring previous memory policy: 4 00:04:23.005 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.005 EAL: request: mp_malloc_sync 00:04:23.005 EAL: No shared files mode enabled, IPC is disabled 00:04:23.005 EAL: Heap on socket 0 was expanded by 18MB 00:04:23.005 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.005 EAL: request: mp_malloc_sync 00:04:23.005 EAL: No shared files mode enabled, IPC is disabled 00:04:23.005 EAL: Heap on socket 0 was shrunk by 18MB 00:04:23.005 EAL: Trying to obtain current memory policy. 00:04:23.005 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.005 EAL: Restoring previous memory policy: 4 00:04:23.005 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.005 EAL: request: mp_malloc_sync 00:04:23.005 EAL: No shared files mode enabled, IPC is disabled 00:04:23.005 EAL: Heap on socket 0 was expanded by 34MB 00:04:23.005 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.005 EAL: request: mp_malloc_sync 00:04:23.005 EAL: No shared files mode enabled, IPC is disabled 00:04:23.005 EAL: Heap on socket 0 was shrunk by 34MB 00:04:23.005 EAL: Trying to obtain current memory policy. 00:04:23.005 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.005 EAL: Restoring previous memory policy: 4 00:04:23.005 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.005 EAL: request: mp_malloc_sync 00:04:23.005 EAL: No shared files mode enabled, IPC is disabled 00:04:23.005 EAL: Heap on socket 0 was expanded by 66MB 00:04:23.005 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.005 EAL: request: mp_malloc_sync 00:04:23.005 EAL: No shared files mode enabled, IPC is disabled 00:04:23.005 EAL: Heap on socket 0 was shrunk by 66MB 00:04:23.005 EAL: Trying to obtain current memory policy. 00:04:23.005 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.005 EAL: Restoring previous memory policy: 4 00:04:23.005 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.005 EAL: request: mp_malloc_sync 00:04:23.005 EAL: No shared files mode enabled, IPC is disabled 00:04:23.005 EAL: Heap on socket 0 was expanded by 130MB 00:04:23.005 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.005 EAL: request: mp_malloc_sync 00:04:23.005 EAL: No shared files mode enabled, IPC is disabled 00:04:23.005 EAL: Heap on socket 0 was shrunk by 130MB 00:04:23.265 EAL: Trying to obtain current memory policy. 00:04:23.265 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.265 EAL: Restoring previous memory policy: 4 00:04:23.265 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.265 EAL: request: mp_malloc_sync 00:04:23.265 EAL: No shared files mode enabled, IPC is disabled 00:04:23.265 EAL: Heap on socket 0 was expanded by 258MB 00:04:23.265 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.523 EAL: request: mp_malloc_sync 00:04:23.523 EAL: No shared files mode enabled, IPC is disabled 00:04:23.523 EAL: Heap on socket 0 was shrunk by 258MB 00:04:23.523 EAL: Trying to obtain current memory policy. 
00:04:23.523 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.523 EAL: Restoring previous memory policy: 4 00:04:23.523 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.523 EAL: request: mp_malloc_sync 00:04:23.523 EAL: No shared files mode enabled, IPC is disabled 00:04:23.523 EAL: Heap on socket 0 was expanded by 514MB 00:04:24.091 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.091 EAL: request: mp_malloc_sync 00:04:24.091 EAL: No shared files mode enabled, IPC is disabled 00:04:24.091 EAL: Heap on socket 0 was shrunk by 514MB 00:04:24.091 EAL: Trying to obtain current memory policy. 00:04:24.091 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.350 EAL: Restoring previous memory policy: 4 00:04:24.350 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.350 EAL: request: mp_malloc_sync 00:04:24.350 EAL: No shared files mode enabled, IPC is disabled 00:04:24.350 EAL: Heap on socket 0 was expanded by 1026MB 00:04:24.921 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.179 EAL: request: mp_malloc_sync 00:04:25.179 EAL: No shared files mode enabled, IPC is disabled 00:04:25.179 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:25.750 passed 00:04:25.750 00:04:25.750 Run Summary: Type Total Ran Passed Failed Inactive 00:04:25.750 suites 1 1 n/a 0 0 00:04:25.750 tests 2 2 2 0 0 00:04:25.750 asserts 497 497 497 0 n/a 00:04:25.750 00:04:25.750 Elapsed time = 2.905 seconds 00:04:25.750 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.750 EAL: request: mp_malloc_sync 00:04:25.750 EAL: No shared files mode enabled, IPC is disabled 00:04:25.750 EAL: Heap on socket 0 was shrunk by 2MB 00:04:25.750 EAL: No shared files mode enabled, IPC is disabled 00:04:25.750 EAL: No shared files mode enabled, IPC is disabled 00:04:25.750 EAL: No shared files mode enabled, IPC is disabled 00:04:25.750 00:04:25.750 real 0m3.088s 00:04:25.750 user 0m2.455s 00:04:25.750 sys 0m0.584s 00:04:25.750 20:21:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.750 20:21:43 -- common/autotest_common.sh@10 -- # set +x 00:04:25.750 ************************************ 00:04:25.750 END TEST env_vtophys 00:04:25.750 ************************************ 00:04:25.750 20:21:43 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/pci/pci_ut 00:04:25.750 20:21:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:25.750 20:21:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:25.750 20:21:43 -- common/autotest_common.sh@10 -- # set +x 00:04:25.750 ************************************ 00:04:25.750 START TEST env_pci 00:04:25.750 ************************************ 00:04:25.750 20:21:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/pci/pci_ut 00:04:25.750 00:04:25.750 00:04:25.750 CUnit - A unit testing framework for C - Version 2.1-3 00:04:25.750 http://cunit.sourceforge.net/ 00:04:25.750 00:04:25.750 00:04:25.750 Suite: pci 00:04:25.750 Test: pci_hook ...[2024-04-26 20:21:43.884353] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3312658 has claimed it 00:04:25.750 EAL: Cannot find device (10000:00:01.0) 00:04:25.750 EAL: Failed to attach device on primary process 00:04:25.750 passed 00:04:25.750 00:04:25.750 Run Summary: Type Total Ran Passed Failed Inactive 00:04:25.750 suites 1 1 n/a 0 0 00:04:25.750 tests 1 1 1 0 0 00:04:25.750 asserts 25 
25 25 0 n/a 00:04:25.750 00:04:25.750 Elapsed time = 0.052 seconds 00:04:25.750 00:04:25.750 real 0m0.104s 00:04:25.750 user 0m0.036s 00:04:25.750 sys 0m0.068s 00:04:25.750 20:21:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.750 20:21:43 -- common/autotest_common.sh@10 -- # set +x 00:04:25.750 ************************************ 00:04:25.750 END TEST env_pci 00:04:25.750 ************************************ 00:04:25.750 20:21:43 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:25.750 20:21:43 -- env/env.sh@15 -- # uname 00:04:25.750 20:21:43 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:25.750 20:21:43 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:25.750 20:21:43 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:25.750 20:21:43 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:04:25.750 20:21:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:25.750 20:21:43 -- common/autotest_common.sh@10 -- # set +x 00:04:25.750 ************************************ 00:04:25.750 START TEST env_dpdk_post_init 00:04:25.750 ************************************ 00:04:25.750 20:21:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:25.750 EAL: Detected CPU lcores: 128 00:04:25.750 EAL: Detected NUMA nodes: 2 00:04:25.750 EAL: Detected shared linkage of DPDK 00:04:25.750 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:26.012 EAL: Selected IOVA mode 'VA' 00:04:26.012 EAL: No free 2048 kB hugepages reported on node 1 00:04:26.012 EAL: VFIO support initialized 00:04:26.012 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:26.012 EAL: Using IOMMU type 1 (Type 1) 00:04:26.012 EAL: Ignore mapping IO port bar(1) 00:04:26.012 EAL: Ignore mapping IO port bar(3) 00:04:26.271 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:6a:01.0 (socket 0) 00:04:26.271 EAL: Ignore mapping IO port bar(1) 00:04:26.271 EAL: Ignore mapping IO port bar(3) 00:04:26.531 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:6a:02.0 (socket 0) 00:04:26.531 EAL: Ignore mapping IO port bar(1) 00:04:26.531 EAL: Ignore mapping IO port bar(3) 00:04:26.531 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:6f:01.0 (socket 0) 00:04:26.791 EAL: Ignore mapping IO port bar(1) 00:04:26.791 EAL: Ignore mapping IO port bar(3) 00:04:26.791 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:6f:02.0 (socket 0) 00:04:27.052 EAL: Ignore mapping IO port bar(1) 00:04:27.052 EAL: Ignore mapping IO port bar(3) 00:04:27.052 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:74:01.0 (socket 0) 00:04:27.313 EAL: Ignore mapping IO port bar(1) 00:04:27.313 EAL: Ignore mapping IO port bar(3) 00:04:27.313 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:74:02.0 (socket 0) 00:04:27.574 EAL: Ignore mapping IO port bar(1) 00:04:27.574 EAL: Ignore mapping IO port bar(3) 00:04:27.574 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:79:01.0 (socket 0) 00:04:27.574 EAL: Ignore mapping IO port bar(1) 00:04:27.574 EAL: Ignore mapping IO port bar(3) 00:04:27.835 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:79:02.0 (socket 0) 00:04:28.407 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:c9:00.0 (socket 1) 00:04:29.348 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 
0000:ca:00.0 (socket 1) 00:04:29.348 EAL: Ignore mapping IO port bar(1) 00:04:29.348 EAL: Ignore mapping IO port bar(3) 00:04:29.348 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:e7:01.0 (socket 1) 00:04:29.608 EAL: Ignore mapping IO port bar(1) 00:04:29.608 EAL: Ignore mapping IO port bar(3) 00:04:29.608 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:e7:02.0 (socket 1) 00:04:29.868 EAL: Ignore mapping IO port bar(1) 00:04:29.868 EAL: Ignore mapping IO port bar(3) 00:04:29.868 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:ec:01.0 (socket 1) 00:04:30.129 EAL: Ignore mapping IO port bar(1) 00:04:30.129 EAL: Ignore mapping IO port bar(3) 00:04:30.129 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:ec:02.0 (socket 1) 00:04:30.390 EAL: Ignore mapping IO port bar(1) 00:04:30.390 EAL: Ignore mapping IO port bar(3) 00:04:30.390 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:f1:01.0 (socket 1) 00:04:30.390 EAL: Ignore mapping IO port bar(1) 00:04:30.390 EAL: Ignore mapping IO port bar(3) 00:04:30.651 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:f1:02.0 (socket 1) 00:04:30.651 EAL: Ignore mapping IO port bar(1) 00:04:30.651 EAL: Ignore mapping IO port bar(3) 00:04:30.911 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:f6:01.0 (socket 1) 00:04:30.911 EAL: Ignore mapping IO port bar(1) 00:04:30.911 EAL: Ignore mapping IO port bar(3) 00:04:30.911 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:f6:02.0 (socket 1) 00:04:35.114 EAL: Releasing PCI mapped resource for 0000:ca:00.0 00:04:35.114 EAL: Calling pci_unmap_resource for 0000:ca:00.0 at 0x202001184000 00:04:35.376 EAL: Releasing PCI mapped resource for 0000:c9:00.0 00:04:35.376 EAL: Calling pci_unmap_resource for 0000:c9:00.0 at 0x202001180000 00:04:35.637 Starting DPDK initialization... 00:04:35.637 Starting SPDK post initialization... 00:04:35.637 SPDK NVMe probe 00:04:35.637 Attaching to 0000:c9:00.0 00:04:35.637 Attaching to 0000:ca:00.0 00:04:35.637 Attached to 0000:c9:00.0 00:04:35.637 Attached to 0000:ca:00.0 00:04:35.637 Cleaning up... 
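The post-init pass above walks the idxd (DSA/IAA) functions on both sockets plus the two NVMe controllers on socket 1, attaches each through its SPDK driver, and unmaps the BARs again on teardown. To rerun just this probe pass by hand, the test binary takes the same core mask and base virtual address the harness passes in (a sketch, assuming the SPDK tree in this workspace is already built and the devices are still bound for userspace use):

  # Single probe/attach/detach cycle pinned to core 0 at a fixed virtual base
  cd /var/jenkins/workspace/dsa-phy-autotest/spdk
  ./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
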
00:04:37.551 00:04:37.551 real 0m11.529s 00:04:37.551 user 0m4.626s 00:04:37.551 sys 0m0.185s 00:04:37.551 20:21:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.551 20:21:55 -- common/autotest_common.sh@10 -- # set +x 00:04:37.551 ************************************ 00:04:37.551 END TEST env_dpdk_post_init 00:04:37.551 ************************************ 00:04:37.551 20:21:55 -- env/env.sh@26 -- # uname 00:04:37.551 20:21:55 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:37.551 20:21:55 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:37.551 20:21:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:37.551 20:21:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:37.551 20:21:55 -- common/autotest_common.sh@10 -- # set +x 00:04:37.551 ************************************ 00:04:37.551 START TEST env_mem_callbacks 00:04:37.551 ************************************ 00:04:37.551 20:21:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:37.551 EAL: Detected CPU lcores: 128 00:04:37.551 EAL: Detected NUMA nodes: 2 00:04:37.551 EAL: Detected shared linkage of DPDK 00:04:37.551 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:37.551 EAL: Selected IOVA mode 'VA' 00:04:37.551 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.551 EAL: VFIO support initialized 00:04:37.551 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:37.551 00:04:37.551 00:04:37.551 CUnit - A unit testing framework for C - Version 2.1-3 00:04:37.551 http://cunit.sourceforge.net/ 00:04:37.551 00:04:37.551 00:04:37.551 Suite: memory 00:04:37.551 Test: test ... 
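The register/unregister trace that follows is the point of this suite: it appears to install a memory-event callback, so each allocation large enough to pull fresh huge pages into the heap logs a register event for the new region, while small allocations (like the 64-byte one below) reuse existing pages and log nothing; frees that return pages log the matching unregister. Running the binary standalone produces the same trace (a sketch, assuming the workspace build is intact):

  # Exercise the memory-event callbacks outside the harness
  cd /var/jenkins/workspace/dsa-phy-autotest/spdk
  ./test/env/mem_callbacks/mem_callbacks
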
00:04:37.551 register 0x200000200000 2097152 00:04:37.551 malloc 3145728 00:04:37.551 register 0x200000400000 4194304 00:04:37.551 buf 0x2000004fffc0 len 3145728 PASSED 00:04:37.551 malloc 64 00:04:37.551 buf 0x2000004ffec0 len 64 PASSED 00:04:37.551 malloc 4194304 00:04:37.551 register 0x200000800000 6291456 00:04:37.551 buf 0x2000009fffc0 len 4194304 PASSED 00:04:37.551 free 0x2000004fffc0 3145728 00:04:37.551 free 0x2000004ffec0 64 00:04:37.551 unregister 0x200000400000 4194304 PASSED 00:04:37.551 free 0x2000009fffc0 4194304 00:04:37.551 unregister 0x200000800000 6291456 PASSED 00:04:37.551 malloc 8388608 00:04:37.551 register 0x200000400000 10485760 00:04:37.551 buf 0x2000005fffc0 len 8388608 PASSED 00:04:37.551 free 0x2000005fffc0 8388608 00:04:37.551 unregister 0x200000400000 10485760 PASSED 00:04:37.551 passed 00:04:37.551 00:04:37.551 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.551 suites 1 1 n/a 0 0 00:04:37.551 tests 1 1 1 0 0 00:04:37.551 asserts 15 15 15 0 n/a 00:04:37.551 00:04:37.551 Elapsed time = 0.025 seconds 00:04:37.551 00:04:37.551 real 0m0.156s 00:04:37.551 user 0m0.055s 00:04:37.551 sys 0m0.099s 00:04:37.551 20:21:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.551 20:21:55 -- common/autotest_common.sh@10 -- # set +x 00:04:37.551 ************************************ 00:04:37.551 END TEST env_mem_callbacks 00:04:37.551 ************************************ 00:04:37.551 00:04:37.551 real 0m15.459s 00:04:37.551 user 0m7.544s 00:04:37.551 sys 0m1.185s 00:04:37.551 20:21:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.551 20:21:55 -- common/autotest_common.sh@10 -- # set +x 00:04:37.551 ************************************ 00:04:37.551 END TEST env 00:04:37.551 ************************************ 00:04:37.551 20:21:55 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/rpc.sh 00:04:37.551 20:21:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:37.551 20:21:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:37.551 20:21:55 -- common/autotest_common.sh@10 -- # set +x 00:04:37.551 ************************************ 00:04:37.551 START TEST rpc 00:04:37.551 ************************************ 00:04:37.551 20:21:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/rpc.sh 00:04:37.551 * Looking for test storage... 00:04:37.551 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:04:37.551 20:21:55 -- rpc/rpc.sh@65 -- # spdk_pid=3315154 00:04:37.551 20:21:55 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.551 20:21:55 -- rpc/rpc.sh@67 -- # waitforlisten 3315154 00:04:37.551 20:21:55 -- common/autotest_common.sh@819 -- # '[' -z 3315154 ']' 00:04:37.551 20:21:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.551 20:21:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:37.551 20:21:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
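Everything from here on in the rpc suite runs against that freshly launched spdk_tgt: rpc_cmd in the traces below is shorthand for scripts/rpc.py pointed at the target's UNIX-domain socket. A minimal manual session against the same default socket looks like this (a sketch; Malloc0 is whatever name the create call echoes back):

  # Enumerate RPC methods, then round-trip a malloc bdev as rpc_integrity does
  ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods
  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 8 512
  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs
  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_delete Malloc0
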
00:04:37.551 20:21:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:37.551 20:21:55 -- common/autotest_common.sh@10 -- # set +x 00:04:37.551 20:21:55 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:37.812 [2024-04-26 20:21:55.963054] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:37.812 [2024-04-26 20:21:55.963197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3315154 ] 00:04:37.812 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.812 [2024-04-26 20:21:56.085997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.073 [2024-04-26 20:21:56.182687] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:38.073 [2024-04-26 20:21:56.182880] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:38.073 [2024-04-26 20:21:56.182895] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3315154' to capture a snapshot of events at runtime. 00:04:38.073 [2024-04-26 20:21:56.182906] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3315154 for offline analysis/debug. 00:04:38.073 [2024-04-26 20:21:56.182929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.644 20:21:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:38.644 20:21:56 -- common/autotest_common.sh@852 -- # return 0 00:04:38.644 20:21:56 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:04:38.644 20:21:56 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:04:38.644 20:21:56 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:38.644 20:21:56 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:38.644 20:21:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:38.644 20:21:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:38.644 20:21:56 -- common/autotest_common.sh@10 -- # set +x 00:04:38.645 ************************************ 00:04:38.645 START TEST rpc_integrity 00:04:38.645 ************************************ 00:04:38.645 20:21:56 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:38.645 20:21:56 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:38.645 20:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:38.645 20:21:56 -- common/autotest_common.sh@10 -- # set +x 00:04:38.645 20:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:38.645 20:21:56 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:38.645 20:21:56 -- rpc/rpc.sh@13 -- # jq length 00:04:38.645 20:21:56 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:38.645 20:21:56 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:38.645 20:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:38.645 20:21:56 -- common/autotest_common.sh@10 -- # 
set +x 00:04:38.645 20:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:38.645 20:21:56 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:38.645 20:21:56 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:38.645 20:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:38.645 20:21:56 -- common/autotest_common.sh@10 -- # set +x 00:04:38.645 20:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:38.645 20:21:56 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:38.645 { 00:04:38.645 "name": "Malloc0", 00:04:38.645 "aliases": [ 00:04:38.645 "548818f7-a48d-4279-a39f-f4332154b78c" 00:04:38.645 ], 00:04:38.645 "product_name": "Malloc disk", 00:04:38.645 "block_size": 512, 00:04:38.645 "num_blocks": 16384, 00:04:38.645 "uuid": "548818f7-a48d-4279-a39f-f4332154b78c", 00:04:38.645 "assigned_rate_limits": { 00:04:38.645 "rw_ios_per_sec": 0, 00:04:38.645 "rw_mbytes_per_sec": 0, 00:04:38.645 "r_mbytes_per_sec": 0, 00:04:38.645 "w_mbytes_per_sec": 0 00:04:38.645 }, 00:04:38.645 "claimed": false, 00:04:38.645 "zoned": false, 00:04:38.645 "supported_io_types": { 00:04:38.645 "read": true, 00:04:38.645 "write": true, 00:04:38.645 "unmap": true, 00:04:38.645 "write_zeroes": true, 00:04:38.645 "flush": true, 00:04:38.645 "reset": true, 00:04:38.645 "compare": false, 00:04:38.645 "compare_and_write": false, 00:04:38.645 "abort": true, 00:04:38.645 "nvme_admin": false, 00:04:38.645 "nvme_io": false 00:04:38.645 }, 00:04:38.645 "memory_domains": [ 00:04:38.645 { 00:04:38.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.645 "dma_device_type": 2 00:04:38.645 } 00:04:38.645 ], 00:04:38.645 "driver_specific": {} 00:04:38.645 } 00:04:38.645 ]' 00:04:38.645 20:21:56 -- rpc/rpc.sh@17 -- # jq length 00:04:38.645 20:21:56 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:38.645 20:21:56 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:38.645 20:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:38.645 20:21:56 -- common/autotest_common.sh@10 -- # set +x 00:04:38.645 [2024-04-26 20:21:56.835739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:38.645 [2024-04-26 20:21:56.835789] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:38.645 [2024-04-26 20:21:56.835819] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600001fb80 00:04:38.645 [2024-04-26 20:21:56.835829] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:38.645 [2024-04-26 20:21:56.837704] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:38.645 [2024-04-26 20:21:56.837732] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:38.645 Passthru0 00:04:38.645 20:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:38.645 20:21:56 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:38.645 20:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:38.645 20:21:56 -- common/autotest_common.sh@10 -- # set +x 00:04:38.645 20:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:38.645 20:21:56 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:38.645 { 00:04:38.645 "name": "Malloc0", 00:04:38.645 "aliases": [ 00:04:38.645 "548818f7-a48d-4279-a39f-f4332154b78c" 00:04:38.645 ], 00:04:38.645 "product_name": "Malloc disk", 00:04:38.645 "block_size": 512, 00:04:38.645 "num_blocks": 16384, 00:04:38.645 "uuid": "548818f7-a48d-4279-a39f-f4332154b78c", 00:04:38.645 "assigned_rate_limits": { 00:04:38.645 
"rw_ios_per_sec": 0, 00:04:38.645 "rw_mbytes_per_sec": 0, 00:04:38.645 "r_mbytes_per_sec": 0, 00:04:38.645 "w_mbytes_per_sec": 0 00:04:38.645 }, 00:04:38.645 "claimed": true, 00:04:38.645 "claim_type": "exclusive_write", 00:04:38.645 "zoned": false, 00:04:38.645 "supported_io_types": { 00:04:38.645 "read": true, 00:04:38.645 "write": true, 00:04:38.645 "unmap": true, 00:04:38.645 "write_zeroes": true, 00:04:38.645 "flush": true, 00:04:38.645 "reset": true, 00:04:38.645 "compare": false, 00:04:38.645 "compare_and_write": false, 00:04:38.645 "abort": true, 00:04:38.645 "nvme_admin": false, 00:04:38.645 "nvme_io": false 00:04:38.645 }, 00:04:38.645 "memory_domains": [ 00:04:38.645 { 00:04:38.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.645 "dma_device_type": 2 00:04:38.645 } 00:04:38.645 ], 00:04:38.645 "driver_specific": {} 00:04:38.645 }, 00:04:38.645 { 00:04:38.645 "name": "Passthru0", 00:04:38.645 "aliases": [ 00:04:38.645 "e9ec29cf-784c-564a-a736-e51f4f6ecd90" 00:04:38.645 ], 00:04:38.645 "product_name": "passthru", 00:04:38.645 "block_size": 512, 00:04:38.645 "num_blocks": 16384, 00:04:38.645 "uuid": "e9ec29cf-784c-564a-a736-e51f4f6ecd90", 00:04:38.645 "assigned_rate_limits": { 00:04:38.645 "rw_ios_per_sec": 0, 00:04:38.645 "rw_mbytes_per_sec": 0, 00:04:38.645 "r_mbytes_per_sec": 0, 00:04:38.645 "w_mbytes_per_sec": 0 00:04:38.645 }, 00:04:38.645 "claimed": false, 00:04:38.645 "zoned": false, 00:04:38.645 "supported_io_types": { 00:04:38.645 "read": true, 00:04:38.645 "write": true, 00:04:38.645 "unmap": true, 00:04:38.645 "write_zeroes": true, 00:04:38.645 "flush": true, 00:04:38.645 "reset": true, 00:04:38.645 "compare": false, 00:04:38.645 "compare_and_write": false, 00:04:38.645 "abort": true, 00:04:38.645 "nvme_admin": false, 00:04:38.645 "nvme_io": false 00:04:38.645 }, 00:04:38.645 "memory_domains": [ 00:04:38.645 { 00:04:38.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.645 "dma_device_type": 2 00:04:38.645 } 00:04:38.645 ], 00:04:38.645 "driver_specific": { 00:04:38.645 "passthru": { 00:04:38.645 "name": "Passthru0", 00:04:38.645 "base_bdev_name": "Malloc0" 00:04:38.645 } 00:04:38.645 } 00:04:38.645 } 00:04:38.645 ]' 00:04:38.645 20:21:56 -- rpc/rpc.sh@21 -- # jq length 00:04:38.645 20:21:56 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:38.645 20:21:56 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:38.645 20:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:38.645 20:21:56 -- common/autotest_common.sh@10 -- # set +x 00:04:38.645 20:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:38.645 20:21:56 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:38.645 20:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:38.645 20:21:56 -- common/autotest_common.sh@10 -- # set +x 00:04:38.645 20:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:38.645 20:21:56 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:38.645 20:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:38.645 20:21:56 -- common/autotest_common.sh@10 -- # set +x 00:04:38.645 20:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:38.645 20:21:56 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:38.645 20:21:56 -- rpc/rpc.sh@26 -- # jq length 00:04:38.645 20:21:56 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:38.645 00:04:38.645 real 0m0.237s 00:04:38.645 user 0m0.136s 00:04:38.645 sys 0m0.031s 00:04:38.645 20:21:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.645 20:21:56 -- 
common/autotest_common.sh@10 -- # set +x 00:04:38.645 ************************************ 00:04:38.645 END TEST rpc_integrity 00:04:38.645 ************************************ 00:04:38.906 20:21:56 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:38.906 20:21:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:38.906 20:21:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:38.906 20:21:56 -- common/autotest_common.sh@10 -- # set +x 00:04:38.906 ************************************ 00:04:38.906 START TEST rpc_plugins 00:04:38.906 ************************************ 00:04:38.906 20:21:56 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:04:38.906 20:21:56 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:38.906 20:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:38.906 20:21:56 -- common/autotest_common.sh@10 -- # set +x 00:04:38.906 20:21:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:38.906 20:21:57 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:38.906 20:21:57 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:38.906 20:21:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:38.906 20:21:57 -- common/autotest_common.sh@10 -- # set +x 00:04:38.906 20:21:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:38.906 20:21:57 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:38.906 { 00:04:38.906 "name": "Malloc1", 00:04:38.906 "aliases": [ 00:04:38.906 "7381bacc-c02b-472e-a0c0-92ca25729c8e" 00:04:38.906 ], 00:04:38.906 "product_name": "Malloc disk", 00:04:38.906 "block_size": 4096, 00:04:38.906 "num_blocks": 256, 00:04:38.906 "uuid": "7381bacc-c02b-472e-a0c0-92ca25729c8e", 00:04:38.906 "assigned_rate_limits": { 00:04:38.906 "rw_ios_per_sec": 0, 00:04:38.906 "rw_mbytes_per_sec": 0, 00:04:38.906 "r_mbytes_per_sec": 0, 00:04:38.906 "w_mbytes_per_sec": 0 00:04:38.906 }, 00:04:38.906 "claimed": false, 00:04:38.906 "zoned": false, 00:04:38.906 "supported_io_types": { 00:04:38.906 "read": true, 00:04:38.906 "write": true, 00:04:38.906 "unmap": true, 00:04:38.906 "write_zeroes": true, 00:04:38.906 "flush": true, 00:04:38.906 "reset": true, 00:04:38.906 "compare": false, 00:04:38.906 "compare_and_write": false, 00:04:38.906 "abort": true, 00:04:38.906 "nvme_admin": false, 00:04:38.906 "nvme_io": false 00:04:38.906 }, 00:04:38.906 "memory_domains": [ 00:04:38.906 { 00:04:38.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.906 "dma_device_type": 2 00:04:38.906 } 00:04:38.906 ], 00:04:38.906 "driver_specific": {} 00:04:38.906 } 00:04:38.906 ]' 00:04:38.906 20:21:57 -- rpc/rpc.sh@32 -- # jq length 00:04:38.906 20:21:57 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:38.906 20:21:57 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:38.906 20:21:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:38.906 20:21:57 -- common/autotest_common.sh@10 -- # set +x 00:04:38.906 20:21:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:38.906 20:21:57 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:38.906 20:21:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:38.906 20:21:57 -- common/autotest_common.sh@10 -- # set +x 00:04:38.906 20:21:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:38.906 20:21:57 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:38.906 20:21:57 -- rpc/rpc.sh@36 -- # jq length 00:04:38.906 20:21:57 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:38.906 00:04:38.906 real 0m0.116s 00:04:38.906 user 0m0.065s 00:04:38.906 sys 0m0.019s 00:04:38.906 20:21:57 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.906 20:21:57 -- common/autotest_common.sh@10 -- # set +x 00:04:38.906 ************************************ 00:04:38.906 END TEST rpc_plugins 00:04:38.906 ************************************ 00:04:38.906 20:21:57 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:38.906 20:21:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:38.906 20:21:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:38.906 20:21:57 -- common/autotest_common.sh@10 -- # set +x 00:04:38.906 ************************************ 00:04:38.906 START TEST rpc_trace_cmd_test 00:04:38.906 ************************************ 00:04:38.906 20:21:57 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:04:38.906 20:21:57 -- rpc/rpc.sh@40 -- # local info 00:04:38.906 20:21:57 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:38.906 20:21:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:38.906 20:21:57 -- common/autotest_common.sh@10 -- # set +x 00:04:38.906 20:21:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:38.906 20:21:57 -- rpc/rpc.sh@42 -- # info='{ 00:04:38.907 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3315154", 00:04:38.907 "tpoint_group_mask": "0x8", 00:04:38.907 "iscsi_conn": { 00:04:38.907 "mask": "0x2", 00:04:38.907 "tpoint_mask": "0x0" 00:04:38.907 }, 00:04:38.907 "scsi": { 00:04:38.907 "mask": "0x4", 00:04:38.907 "tpoint_mask": "0x0" 00:04:38.907 }, 00:04:38.907 "bdev": { 00:04:38.907 "mask": "0x8", 00:04:38.907 "tpoint_mask": "0xffffffffffffffff" 00:04:38.907 }, 00:04:38.907 "nvmf_rdma": { 00:04:38.907 "mask": "0x10", 00:04:38.907 "tpoint_mask": "0x0" 00:04:38.907 }, 00:04:38.907 "nvmf_tcp": { 00:04:38.907 "mask": "0x20", 00:04:38.907 "tpoint_mask": "0x0" 00:04:38.907 }, 00:04:38.907 "ftl": { 00:04:38.907 "mask": "0x40", 00:04:38.907 "tpoint_mask": "0x0" 00:04:38.907 }, 00:04:38.907 "blobfs": { 00:04:38.907 "mask": "0x80", 00:04:38.907 "tpoint_mask": "0x0" 00:04:38.907 }, 00:04:38.907 "dsa": { 00:04:38.907 "mask": "0x200", 00:04:38.907 "tpoint_mask": "0x0" 00:04:38.907 }, 00:04:38.907 "thread": { 00:04:38.907 "mask": "0x400", 00:04:38.907 "tpoint_mask": "0x0" 00:04:38.907 }, 00:04:38.907 "nvme_pcie": { 00:04:38.907 "mask": "0x800", 00:04:38.907 "tpoint_mask": "0x0" 00:04:38.907 }, 00:04:38.907 "iaa": { 00:04:38.907 "mask": "0x1000", 00:04:38.907 "tpoint_mask": "0x0" 00:04:38.907 }, 00:04:38.907 "nvme_tcp": { 00:04:38.907 "mask": "0x2000", 00:04:38.907 "tpoint_mask": "0x0" 00:04:38.907 }, 00:04:38.907 "bdev_nvme": { 00:04:38.907 "mask": "0x4000", 00:04:38.907 "tpoint_mask": "0x0" 00:04:38.907 } 00:04:38.907 }' 00:04:38.907 20:21:57 -- rpc/rpc.sh@43 -- # jq length 00:04:38.907 20:21:57 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:38.907 20:21:57 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:38.907 20:21:57 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:38.907 20:21:57 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:39.168 20:21:57 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:39.168 20:21:57 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:39.168 20:21:57 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:39.168 20:21:57 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:39.168 20:21:57 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:39.168 00:04:39.168 real 0m0.190s 00:04:39.168 user 0m0.150s 00:04:39.168 sys 0m0.030s 00:04:39.168 20:21:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.168 20:21:57 -- 
common/autotest_common.sh@10 -- # set +x 00:04:39.168 ************************************ 00:04:39.168 END TEST rpc_trace_cmd_test 00:04:39.168 ************************************ 00:04:39.168 20:21:57 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:39.168 20:21:57 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:39.168 20:21:57 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:39.168 20:21:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:39.168 20:21:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:39.168 20:21:57 -- common/autotest_common.sh@10 -- # set +x 00:04:39.168 ************************************ 00:04:39.168 START TEST rpc_daemon_integrity 00:04:39.168 ************************************ 00:04:39.168 20:21:57 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:39.168 20:21:57 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:39.168 20:21:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:39.168 20:21:57 -- common/autotest_common.sh@10 -- # set +x 00:04:39.168 20:21:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:39.168 20:21:57 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:39.168 20:21:57 -- rpc/rpc.sh@13 -- # jq length 00:04:39.168 20:21:57 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:39.168 20:21:57 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:39.168 20:21:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:39.168 20:21:57 -- common/autotest_common.sh@10 -- # set +x 00:04:39.168 20:21:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:39.168 20:21:57 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:39.168 20:21:57 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:39.168 20:21:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:39.168 20:21:57 -- common/autotest_common.sh@10 -- # set +x 00:04:39.168 20:21:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:39.168 20:21:57 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:39.168 { 00:04:39.168 "name": "Malloc2", 00:04:39.168 "aliases": [ 00:04:39.168 "2dab21dc-302d-492b-8389-aa60a4a6308e" 00:04:39.168 ], 00:04:39.168 "product_name": "Malloc disk", 00:04:39.168 "block_size": 512, 00:04:39.168 "num_blocks": 16384, 00:04:39.168 "uuid": "2dab21dc-302d-492b-8389-aa60a4a6308e", 00:04:39.168 "assigned_rate_limits": { 00:04:39.168 "rw_ios_per_sec": 0, 00:04:39.168 "rw_mbytes_per_sec": 0, 00:04:39.168 "r_mbytes_per_sec": 0, 00:04:39.168 "w_mbytes_per_sec": 0 00:04:39.168 }, 00:04:39.168 "claimed": false, 00:04:39.168 "zoned": false, 00:04:39.168 "supported_io_types": { 00:04:39.168 "read": true, 00:04:39.168 "write": true, 00:04:39.168 "unmap": true, 00:04:39.168 "write_zeroes": true, 00:04:39.168 "flush": true, 00:04:39.168 "reset": true, 00:04:39.168 "compare": false, 00:04:39.168 "compare_and_write": false, 00:04:39.168 "abort": true, 00:04:39.168 "nvme_admin": false, 00:04:39.168 "nvme_io": false 00:04:39.168 }, 00:04:39.168 "memory_domains": [ 00:04:39.168 { 00:04:39.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.168 "dma_device_type": 2 00:04:39.168 } 00:04:39.168 ], 00:04:39.168 "driver_specific": {} 00:04:39.168 } 00:04:39.168 ]' 00:04:39.168 20:21:57 -- rpc/rpc.sh@17 -- # jq length 00:04:39.168 20:21:57 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:39.168 20:21:57 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:39.168 20:21:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:39.168 20:21:57 -- common/autotest_common.sh@10 -- # set +x 00:04:39.168 [2024-04-26 20:21:57.498346] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:39.168 [2024-04-26 20:21:57.498393] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:39.168 [2024-04-26 20:21:57.498417] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000020d80 00:04:39.168 [2024-04-26 20:21:57.498427] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:39.168 [2024-04-26 20:21:57.500314] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:39.168 [2024-04-26 20:21:57.500343] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:39.168 Passthru0 00:04:39.168 20:21:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:39.168 20:21:57 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:39.168 20:21:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:39.168 20:21:57 -- common/autotest_common.sh@10 -- # set +x 00:04:39.430 20:21:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:39.430 20:21:57 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:39.430 { 00:04:39.430 "name": "Malloc2", 00:04:39.430 "aliases": [ 00:04:39.430 "2dab21dc-302d-492b-8389-aa60a4a6308e" 00:04:39.430 ], 00:04:39.430 "product_name": "Malloc disk", 00:04:39.430 "block_size": 512, 00:04:39.430 "num_blocks": 16384, 00:04:39.430 "uuid": "2dab21dc-302d-492b-8389-aa60a4a6308e", 00:04:39.430 "assigned_rate_limits": { 00:04:39.430 "rw_ios_per_sec": 0, 00:04:39.430 "rw_mbytes_per_sec": 0, 00:04:39.430 "r_mbytes_per_sec": 0, 00:04:39.430 "w_mbytes_per_sec": 0 00:04:39.430 }, 00:04:39.430 "claimed": true, 00:04:39.430 "claim_type": "exclusive_write", 00:04:39.430 "zoned": false, 00:04:39.430 "supported_io_types": { 00:04:39.430 "read": true, 00:04:39.430 "write": true, 00:04:39.430 "unmap": true, 00:04:39.430 "write_zeroes": true, 00:04:39.430 "flush": true, 00:04:39.430 "reset": true, 00:04:39.430 "compare": false, 00:04:39.430 "compare_and_write": false, 00:04:39.430 "abort": true, 00:04:39.430 "nvme_admin": false, 00:04:39.430 "nvme_io": false 00:04:39.430 }, 00:04:39.430 "memory_domains": [ 00:04:39.430 { 00:04:39.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.430 "dma_device_type": 2 00:04:39.430 } 00:04:39.430 ], 00:04:39.430 "driver_specific": {} 00:04:39.430 }, 00:04:39.430 { 00:04:39.430 "name": "Passthru0", 00:04:39.430 "aliases": [ 00:04:39.430 "e9ad809f-148c-544d-8a9c-f748a23647b2" 00:04:39.430 ], 00:04:39.430 "product_name": "passthru", 00:04:39.430 "block_size": 512, 00:04:39.430 "num_blocks": 16384, 00:04:39.430 "uuid": "e9ad809f-148c-544d-8a9c-f748a23647b2", 00:04:39.430 "assigned_rate_limits": { 00:04:39.430 "rw_ios_per_sec": 0, 00:04:39.430 "rw_mbytes_per_sec": 0, 00:04:39.430 "r_mbytes_per_sec": 0, 00:04:39.430 "w_mbytes_per_sec": 0 00:04:39.430 }, 00:04:39.430 "claimed": false, 00:04:39.430 "zoned": false, 00:04:39.430 "supported_io_types": { 00:04:39.430 "read": true, 00:04:39.430 "write": true, 00:04:39.430 "unmap": true, 00:04:39.430 "write_zeroes": true, 00:04:39.430 "flush": true, 00:04:39.430 "reset": true, 00:04:39.430 "compare": false, 00:04:39.430 "compare_and_write": false, 00:04:39.430 "abort": true, 00:04:39.430 "nvme_admin": false, 00:04:39.430 "nvme_io": false 00:04:39.430 }, 00:04:39.430 "memory_domains": [ 00:04:39.430 { 00:04:39.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.430 "dma_device_type": 2 00:04:39.430 } 00:04:39.430 ], 00:04:39.430 "driver_specific": { 00:04:39.430 "passthru": { 00:04:39.430 "name": 
"Passthru0", 00:04:39.430 "base_bdev_name": "Malloc2" 00:04:39.430 } 00:04:39.430 } 00:04:39.430 } 00:04:39.430 ]' 00:04:39.430 20:21:57 -- rpc/rpc.sh@21 -- # jq length 00:04:39.430 20:21:57 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:39.430 20:21:57 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:39.430 20:21:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:39.430 20:21:57 -- common/autotest_common.sh@10 -- # set +x 00:04:39.430 20:21:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:39.430 20:21:57 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:39.430 20:21:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:39.430 20:21:57 -- common/autotest_common.sh@10 -- # set +x 00:04:39.430 20:21:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:39.430 20:21:57 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:39.430 20:21:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:39.430 20:21:57 -- common/autotest_common.sh@10 -- # set +x 00:04:39.430 20:21:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:39.430 20:21:57 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:39.430 20:21:57 -- rpc/rpc.sh@26 -- # jq length 00:04:39.430 20:21:57 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:39.430 00:04:39.430 real 0m0.228s 00:04:39.430 user 0m0.127s 00:04:39.430 sys 0m0.034s 00:04:39.431 20:21:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.431 20:21:57 -- common/autotest_common.sh@10 -- # set +x 00:04:39.431 ************************************ 00:04:39.431 END TEST rpc_daemon_integrity 00:04:39.431 ************************************ 00:04:39.431 20:21:57 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:39.431 20:21:57 -- rpc/rpc.sh@84 -- # killprocess 3315154 00:04:39.431 20:21:57 -- common/autotest_common.sh@926 -- # '[' -z 3315154 ']' 00:04:39.431 20:21:57 -- common/autotest_common.sh@930 -- # kill -0 3315154 00:04:39.431 20:21:57 -- common/autotest_common.sh@931 -- # uname 00:04:39.431 20:21:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:39.431 20:21:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3315154 00:04:39.431 20:21:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:39.431 20:21:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:39.431 20:21:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3315154' 00:04:39.431 killing process with pid 3315154 00:04:39.431 20:21:57 -- common/autotest_common.sh@945 -- # kill 3315154 00:04:39.431 20:21:57 -- common/autotest_common.sh@950 -- # wait 3315154 00:04:40.371 00:04:40.371 real 0m2.777s 00:04:40.371 user 0m3.209s 00:04:40.371 sys 0m0.706s 00:04:40.371 20:21:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.371 20:21:58 -- common/autotest_common.sh@10 -- # set +x 00:04:40.371 ************************************ 00:04:40.371 END TEST rpc 00:04:40.371 ************************************ 00:04:40.371 20:21:58 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:40.371 20:21:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:40.371 20:21:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:40.371 20:21:58 -- common/autotest_common.sh@10 -- # set +x 00:04:40.371 ************************************ 00:04:40.371 START TEST rpc_client 00:04:40.371 ************************************ 00:04:40.371 20:21:58 -- common/autotest_common.sh@1104 
-- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:40.371 * Looking for test storage... 00:04:40.371 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client 00:04:40.371 20:21:58 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:40.631 OK 00:04:40.631 20:21:58 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:40.631 00:04:40.631 real 0m0.118s 00:04:40.631 user 0m0.047s 00:04:40.631 sys 0m0.075s 00:04:40.631 20:21:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.631 20:21:58 -- common/autotest_common.sh@10 -- # set +x 00:04:40.631 ************************************ 00:04:40.631 END TEST rpc_client 00:04:40.631 ************************************ 00:04:40.631 20:21:58 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config.sh 00:04:40.631 20:21:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:40.631 20:21:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:40.631 20:21:58 -- common/autotest_common.sh@10 -- # set +x 00:04:40.631 ************************************ 00:04:40.631 START TEST json_config 00:04:40.631 ************************************ 00:04:40.631 20:21:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config.sh 00:04:40.631 20:21:58 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:04:40.631 20:21:58 -- nvmf/common.sh@7 -- # uname -s 00:04:40.631 20:21:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:40.631 20:21:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:40.631 20:21:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:40.631 20:21:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:40.631 20:21:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:40.631 20:21:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:40.631 20:21:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:40.631 20:21:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:40.631 20:21:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:40.631 20:21:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:40.631 20:21:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:04:40.631 20:21:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:04:40.631 20:21:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:40.631 20:21:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:40.631 20:21:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:40.631 20:21:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:04:40.631 20:21:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:40.631 20:21:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:40.631 20:21:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:40.631 20:21:58 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.631 20:21:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.631 20:21:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.631 20:21:58 -- paths/export.sh@5 -- # export PATH 00:04:40.631 20:21:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.631 20:21:58 -- nvmf/common.sh@46 -- # : 0 00:04:40.631 20:21:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:40.631 20:21:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:40.631 20:21:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:40.631 20:21:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:40.631 20:21:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:40.631 20:21:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:40.631 20:21:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:40.631 20:21:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:40.631 20:21:58 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:40.631 20:21:58 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:40.631 20:21:58 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:40.631 20:21:58 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:40.631 20:21:58 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:04:40.631 20:21:58 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:04:40.631 20:21:58 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:40.631 20:21:58 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:04:40.631 20:21:58 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:40.631 20:21:58 -- json_config/json_config.sh@32 -- # declare -A app_params 00:04:40.631 20:21:58 -- json_config/json_config.sh@33 -- # 
configs_path=(['target']='/var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_initiator_config.json') 00:04:40.631 20:21:58 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:04:40.631 20:21:58 -- json_config/json_config.sh@43 -- # last_event_id=0 00:04:40.631 20:21:58 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:40.631 20:21:58 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:04:40.631 INFO: JSON configuration test init 00:04:40.631 20:21:58 -- json_config/json_config.sh@420 -- # json_config_test_init 00:04:40.632 20:21:58 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:04:40.632 20:21:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:40.632 20:21:58 -- common/autotest_common.sh@10 -- # set +x 00:04:40.632 20:21:58 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:04:40.632 20:21:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:40.632 20:21:58 -- common/autotest_common.sh@10 -- # set +x 00:04:40.632 20:21:58 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:04:40.632 20:21:58 -- json_config/json_config.sh@98 -- # local app=target 00:04:40.632 20:21:58 -- json_config/json_config.sh@99 -- # shift 00:04:40.632 20:21:58 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:40.632 20:21:58 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:40.632 20:21:58 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:40.632 20:21:58 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:40.632 20:21:58 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:40.632 20:21:58 -- json_config/json_config.sh@111 -- # app_pid[$app]=3315882 00:04:40.632 20:21:58 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:40.632 Waiting for target to run... 00:04:40.632 20:21:58 -- json_config/json_config.sh@114 -- # waitforlisten 3315882 /var/tmp/spdk_tgt.sock 00:04:40.632 20:21:58 -- common/autotest_common.sh@819 -- # '[' -z 3315882 ']' 00:04:40.632 20:21:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:40.632 20:21:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:40.632 20:21:58 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:40.632 20:21:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:40.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:40.632 20:21:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:40.632 20:21:58 -- common/autotest_common.sh@10 -- # set +x 00:04:40.632 [2024-04-26 20:21:58.922008] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:04:40.632 [2024-04-26 20:21:58.922147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3315882 ] 00:04:40.893 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.154 [2024-04-26 20:21:59.241993] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.154 [2024-04-26 20:21:59.328099] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:41.154 [2024-04-26 20:21:59.328299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.415 20:21:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:41.415 20:21:59 -- common/autotest_common.sh@852 -- # return 0 00:04:41.415 20:21:59 -- json_config/json_config.sh@115 -- # echo '' 00:04:41.415 00:04:41.415 20:21:59 -- json_config/json_config.sh@322 -- # create_accel_config 00:04:41.415 20:21:59 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:04:41.415 20:21:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:41.415 20:21:59 -- common/autotest_common.sh@10 -- # set +x 00:04:41.415 20:21:59 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:04:41.415 20:21:59 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:04:41.415 20:21:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:41.415 20:21:59 -- common/autotest_common.sh@10 -- # set +x 00:04:41.415 20:21:59 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:04:41.415 20:21:59 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:41.415 20:21:59 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:48.027 20:22:05 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:04:48.027 20:22:05 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:04:48.027 20:22:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:48.027 20:22:05 -- common/autotest_common.sh@10 -- # set +x 00:04:48.027 20:22:05 -- json_config/json_config.sh@48 -- # local ret=0 00:04:48.027 20:22:05 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:48.027 20:22:05 -- json_config/json_config.sh@49 -- # local enabled_types 00:04:48.027 20:22:05 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:48.027 20:22:05 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:48.027 20:22:05 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:48.027 20:22:05 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:48.027 20:22:05 -- json_config/json_config.sh@51 -- # local get_types 00:04:48.027 20:22:05 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:48.027 20:22:05 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:04:48.027 20:22:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:48.027 20:22:05 -- common/autotest_common.sh@10 -- # set +x 00:04:48.028 20:22:05 -- json_config/json_config.sh@58 -- # return 0 00:04:48.028 20:22:05 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:04:48.028 20:22:05 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:04:48.028 20:22:05 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:04:48.028 20:22:05 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:04:48.028 20:22:05 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:04:48.028 20:22:05 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:04:48.028 20:22:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:48.028 20:22:05 -- common/autotest_common.sh@10 -- # set +x 00:04:48.028 20:22:05 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:48.028 20:22:05 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:04:48.028 20:22:05 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:04:48.028 20:22:05 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:48.028 20:22:05 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:48.028 MallocForNvmf0 00:04:48.028 20:22:06 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:48.028 20:22:06 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:48.028 MallocForNvmf1 00:04:48.028 20:22:06 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:48.028 20:22:06 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:48.028 [2024-04-26 20:22:06.290710] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:48.028 20:22:06 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:48.028 20:22:06 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:48.288 20:22:06 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:48.288 20:22:06 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:48.288 20:22:06 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:48.288 20:22:06 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:48.547 20:22:06 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:48.547 20:22:06 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:48.547 [2024-04-26 20:22:06.811197] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:48.547 20:22:06 -- 
json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:04:48.547 20:22:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:48.547 20:22:06 -- common/autotest_common.sh@10 -- # set +x 00:04:48.547 20:22:06 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:04:48.547 20:22:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:48.547 20:22:06 -- common/autotest_common.sh@10 -- # set +x 00:04:48.808 20:22:06 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:04:48.808 20:22:06 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:48.808 20:22:06 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:48.808 MallocBdevForConfigChangeCheck 00:04:48.808 20:22:07 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:04:48.808 20:22:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:48.808 20:22:07 -- common/autotest_common.sh@10 -- # set +x 00:04:48.808 20:22:07 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:04:48.808 20:22:07 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:49.070 20:22:07 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:04:49.070 INFO: shutting down applications... 00:04:49.070 20:22:07 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:04:49.070 20:22:07 -- json_config/json_config.sh@431 -- # json_config_clear target 00:04:49.070 20:22:07 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:04:49.070 20:22:07 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:54.357 Calling clear_iscsi_subsystem 00:04:54.357 Calling clear_nvmf_subsystem 00:04:54.357 Calling clear_nbd_subsystem 00:04:54.357 Calling clear_ublk_subsystem 00:04:54.357 Calling clear_vhost_blk_subsystem 00:04:54.357 Calling clear_vhost_scsi_subsystem 00:04:54.357 Calling clear_scheduler_subsystem 00:04:54.357 Calling clear_bdev_subsystem 00:04:54.357 Calling clear_accel_subsystem 00:04:54.357 Calling clear_vmd_subsystem 00:04:54.357 Calling clear_sock_subsystem 00:04:54.357 Calling clear_iobuf_subsystem 00:04:54.357 20:22:12 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py 00:04:54.357 20:22:12 -- json_config/json_config.sh@396 -- # count=100 00:04:54.357 20:22:12 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:04:54.357 20:22:12 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:54.357 20:22:12 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:54.357 20:22:12 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:54.357 20:22:12 -- json_config/json_config.sh@398 -- # break 00:04:54.357 20:22:12 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:04:54.357 20:22:12 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:04:54.357 20:22:12 -- 
json_config/json_config.sh@120 -- # local app=target 00:04:54.357 20:22:12 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:04:54.357 20:22:12 -- json_config/json_config.sh@124 -- # [[ -n 3315882 ]] 00:04:54.357 20:22:12 -- json_config/json_config.sh@127 -- # kill -SIGINT 3315882 00:04:54.357 20:22:12 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:04:54.357 20:22:12 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:54.357 20:22:12 -- json_config/json_config.sh@130 -- # kill -0 3315882 00:04:54.357 20:22:12 -- json_config/json_config.sh@134 -- # sleep 0.5 00:04:54.617 20:22:12 -- json_config/json_config.sh@129 -- # (( i++ )) 00:04:54.617 20:22:12 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:54.617 20:22:12 -- json_config/json_config.sh@130 -- # kill -0 3315882 00:04:54.617 20:22:12 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:04:54.617 20:22:12 -- json_config/json_config.sh@132 -- # break 00:04:54.617 20:22:12 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:04:54.617 20:22:12 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:04:54.617 SPDK target shutdown done 00:04:54.617 20:22:12 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:04:54.618 INFO: relaunching applications... 00:04:54.618 20:22:12 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:54.618 20:22:12 -- json_config/json_config.sh@98 -- # local app=target 00:04:54.618 20:22:12 -- json_config/json_config.sh@99 -- # shift 00:04:54.618 20:22:12 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:54.618 20:22:12 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:54.618 20:22:12 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:54.618 20:22:12 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:54.618 20:22:12 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:54.618 20:22:12 -- json_config/json_config.sh@111 -- # app_pid[$app]=3318845 00:04:54.618 20:22:12 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:54.618 Waiting for target to run... 00:04:54.618 20:22:12 -- json_config/json_config.sh@114 -- # waitforlisten 3318845 /var/tmp/spdk_tgt.sock 00:04:54.618 20:22:12 -- common/autotest_common.sh@819 -- # '[' -z 3318845 ']' 00:04:54.618 20:22:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:54.618 20:22:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:54.618 20:22:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:54.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:54.618 20:22:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:54.618 20:22:12 -- common/autotest_common.sh@10 -- # set +x 00:04:54.618 20:22:12 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:54.878 [2024-04-26 20:22:12.972422] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
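The shutdown sequence traced above is the harness's standard pattern: send SIGINT to the target, then poll with kill -0 for up to thirty half-second intervals before declaring failure. A minimal standalone sketch of that loop (the 30 x 0.5 s budget matches json_config.sh; the function name and error text are mine):

    # Gracefully stop a target app: SIGINT first, then poll until the pid is gone.
    shutdown_app() {
        local pid=$1
        kill -SIGINT "$pid"
        for ((i = 0; i < 30; i++)); do
            # kill -0 delivers no signal; it only tests that the process still exists
            kill -0 "$pid" 2> /dev/null || return 0
            sleep 0.5
        done
        echo "process $pid did not exit within 15 s" >&2
        return 1
    }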
00:04:54.878 [2024-04-26 20:22:12.972581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3318845 ] 00:04:54.878 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.139 [2024-04-26 20:22:13.474886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.398 [2024-04-26 20:22:13.564630] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:55.398 [2024-04-26 20:22:13.564846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.983 [2024-04-26 20:22:19.642232] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:01.983 [2024-04-26 20:22:19.674506] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:01.983 20:22:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:01.984 20:22:20 -- common/autotest_common.sh@852 -- # return 0 00:05:01.984 20:22:20 -- json_config/json_config.sh@115 -- # echo '' 00:05:01.984 00:05:01.984 20:22:20 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:01.984 20:22:20 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:01.984 INFO: Checking if target configuration is the same... 00:05:01.984 20:22:20 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:05:01.984 20:22:20 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:01.984 20:22:20 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:01.984 + '[' 2 -ne 2 ']' 00:05:01.984 +++ dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:01.984 ++ readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/../.. 00:05:01.984 + rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:05:01.984 +++ basename /dev/fd/62 00:05:01.984 ++ mktemp /tmp/62.XXX 00:05:01.984 + tmp_file_1=/tmp/62.RjB 00:05:01.984 +++ basename /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:05:01.984 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:01.984 + tmp_file_2=/tmp/spdk_tgt_config.json.d8m 00:05:01.984 + ret=0 00:05:01.984 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:02.245 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:02.245 + diff -u /tmp/62.RjB /tmp/spdk_tgt_config.json.d8m 00:05:02.245 + echo 'INFO: JSON config files are the same' 00:05:02.245 INFO: JSON config files are the same 00:05:02.245 + rm /tmp/62.RjB /tmp/spdk_tgt_config.json.d8m 00:05:02.245 + exit 0 00:05:02.245 20:22:20 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:02.245 20:22:20 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:02.245 INFO: changing configuration and checking if this can be detected... 
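The "configuration is the same" verdict above comes from dumping the live config with the save_config RPC, normalizing both JSON documents with config_filter.py -method sort, and diffing the results. A minimal sketch using the same workspace paths as this run (the temp-file names are arbitrary):

    # Compare the running target's config against the saved reference file.
    rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk
    sort_json() { "$rootdir/test/json_config/config_filter.py" -method sort; }
    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config | sort_json > /tmp/live.json
    sort_json < "$rootdir/spdk_tgt_config.json" > /tmp/ref.json
    diff -u /tmp/ref.json /tmp/live.json && echo 'INFO: JSON config files are the same'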
00:05:02.245 20:22:20 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:02.245 20:22:20 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:02.245 20:22:20 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:05:02.245 20:22:20 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:02.245 20:22:20 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:02.245 + '[' 2 -ne 2 ']' 00:05:02.245 +++ dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:02.505 ++ readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/../.. 00:05:02.505 + rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:05:02.505 +++ basename /dev/fd/62 00:05:02.505 ++ mktemp /tmp/62.XXX 00:05:02.505 + tmp_file_1=/tmp/62.hjN 00:05:02.505 +++ basename /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:05:02.505 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:02.505 + tmp_file_2=/tmp/spdk_tgt_config.json.3IL 00:05:02.505 + ret=0 00:05:02.505 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:02.505 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:02.766 + diff -u /tmp/62.hjN /tmp/spdk_tgt_config.json.3IL 00:05:02.766 + ret=1 00:05:02.766 + echo '=== Start of file: /tmp/62.hjN ===' 00:05:02.766 + cat /tmp/62.hjN 00:05:02.766 + echo '=== End of file: /tmp/62.hjN ===' 00:05:02.766 + echo '' 00:05:02.766 + echo '=== Start of file: /tmp/spdk_tgt_config.json.3IL ===' 00:05:02.766 + cat /tmp/spdk_tgt_config.json.3IL 00:05:02.766 + echo '=== End of file: /tmp/spdk_tgt_config.json.3IL ===' 00:05:02.766 + echo '' 00:05:02.766 + rm /tmp/62.hjN /tmp/spdk_tgt_config.json.3IL 00:05:02.766 + exit 1 00:05:02.766 20:22:20 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:02.766 INFO: configuration change detected. 
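Change detection inverts that check: the sentinel bdev MallocBdevForConfigChangeCheck, created earlier for exactly this purpose, is deleted over RPC, after which the sorted diff must fail. A self-contained sketch under the same paths (the error message is mine):

    # Delete the sentinel bdev, then require the sorted diff to report a change.
    rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk
    sort_json() { "$rootdir/test/json_config/config_filter.py" -method sort; }
    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    sort_json < "$rootdir/spdk_tgt_config.json" > /tmp/ref.json
    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config | sort_json > /tmp/live.json
    if diff -u /tmp/ref.json /tmp/live.json > /dev/null; then
        echo 'ERROR: configuration change was not detected' >&2
        exit 1
    fi
    echo 'INFO: configuration change detected.'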
00:05:02.766 20:22:20 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:02.766 20:22:20 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:02.766 20:22:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:02.766 20:22:20 -- common/autotest_common.sh@10 -- # set +x 00:05:02.766 20:22:20 -- json_config/json_config.sh@360 -- # local ret=0 00:05:02.766 20:22:20 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:02.766 20:22:20 -- json_config/json_config.sh@370 -- # [[ -n 3318845 ]] 00:05:02.766 20:22:20 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:02.766 20:22:20 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:02.766 20:22:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:02.766 20:22:20 -- common/autotest_common.sh@10 -- # set +x 00:05:02.766 20:22:20 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:02.766 20:22:20 -- json_config/json_config.sh@246 -- # uname -s 00:05:02.766 20:22:20 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:02.766 20:22:20 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:02.766 20:22:20 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:02.766 20:22:20 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:02.766 20:22:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:02.766 20:22:20 -- common/autotest_common.sh@10 -- # set +x 00:05:02.766 20:22:20 -- json_config/json_config.sh@376 -- # killprocess 3318845 00:05:02.766 20:22:20 -- common/autotest_common.sh@926 -- # '[' -z 3318845 ']' 00:05:02.766 20:22:20 -- common/autotest_common.sh@930 -- # kill -0 3318845 00:05:02.766 20:22:20 -- common/autotest_common.sh@931 -- # uname 00:05:02.766 20:22:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:02.766 20:22:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3318845 00:05:02.766 20:22:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:02.766 20:22:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:02.766 20:22:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3318845' 00:05:02.766 killing process with pid 3318845 00:05:02.766 20:22:20 -- common/autotest_common.sh@945 -- # kill 3318845 00:05:02.766 20:22:20 -- common/autotest_common.sh@950 -- # wait 3318845 00:05:06.093 20:22:24 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:05:06.093 20:22:24 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:06.093 20:22:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:06.093 20:22:24 -- common/autotest_common.sh@10 -- # set +x 00:05:06.093 20:22:24 -- json_config/json_config.sh@381 -- # return 0 00:05:06.093 20:22:24 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:06.093 INFO: Success 00:05:06.093 00:05:06.093 real 0m25.281s 00:05:06.093 user 0m24.623s 00:05:06.093 sys 0m2.296s 00:05:06.093 20:22:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.093 20:22:24 -- common/autotest_common.sh@10 -- # set +x 00:05:06.093 ************************************ 00:05:06.093 END TEST json_config 00:05:06.093 ************************************ 00:05:06.093 20:22:24 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:06.093 20:22:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:06.093 20:22:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:06.093 20:22:24 -- common/autotest_common.sh@10 -- # set +x 00:05:06.093 ************************************ 00:05:06.093 START TEST json_config_extra_key 00:05:06.093 ************************************ 00:05:06.093 20:22:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:06.093 20:22:24 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:05:06.093 20:22:24 -- nvmf/common.sh@7 -- # uname -s 00:05:06.093 20:22:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:06.093 20:22:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:06.093 20:22:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:06.093 20:22:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:06.093 20:22:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:06.093 20:22:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:06.093 20:22:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:06.093 20:22:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:06.093 20:22:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:06.093 20:22:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:06.093 20:22:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:05:06.093 20:22:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:05:06.093 20:22:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:06.093 20:22:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:06.093 20:22:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:06.093 20:22:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:05:06.093 20:22:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:06.093 20:22:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:06.093 20:22:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:06.093 20:22:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.093 20:22:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.093 20:22:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.093 20:22:24 -- paths/export.sh@5 -- # export PATH 00:05:06.093 20:22:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.093 20:22:24 -- nvmf/common.sh@46 -- # : 0 00:05:06.093 20:22:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:06.093 20:22:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:06.093 20:22:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:06.093 20:22:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:06.093 20:22:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:06.093 20:22:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:06.093 20:22:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:06.093 20:22:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:06.093 20:22:24 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:06.093 20:22:24 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:06.093 20:22:24 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:06.093 20:22:24 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:06.093 20:22:24 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:06.093 20:22:24 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:06.093 20:22:24 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:06.093 20:22:24 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:06.093 20:22:24 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:06.093 20:22:24 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:06.093 INFO: launching applications... 00:05:06.093 20:22:24 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/extra_key.json 00:05:06.093 20:22:24 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:06.093 20:22:24 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:06.093 20:22:24 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:06.093 20:22:24 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:06.093 20:22:24 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=3321098 00:05:06.093 20:22:24 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:06.093 Waiting for target to run... 
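The harness starts spdk_tgt directly from a JSON file and blocks in waitforlisten until the RPC socket answers. A rough equivalent of that startup-and-wait, assuming the paths and flags from this run (the polling loop is a stand-in for the autotest helper, not its actual implementation):

    # Start spdk_tgt from a JSON config and wait until its RPC socket answers.
    rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk
    "$rootdir/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json "$rootdir/test/json_config/extra_key.json" &
    tgt_pid=$!
    # poll a cheap RPC until the socket is live, bailing out if the target dies
    until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock spdk_get_version > /dev/null 2>&1; do
        kill -0 "$tgt_pid" 2> /dev/null || { echo 'target died during startup' >&2; exit 1; }
        sleep 0.5
    done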
00:05:06.093 20:22:24 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 3321098 /var/tmp/spdk_tgt.sock 00:05:06.093 20:22:24 -- common/autotest_common.sh@819 -- # '[' -z 3321098 ']' 00:05:06.093 20:22:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:06.093 20:22:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:06.093 20:22:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:06.093 20:22:24 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/extra_key.json 00:05:06.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:06.093 20:22:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:06.093 20:22:24 -- common/autotest_common.sh@10 -- # set +x 00:05:06.093 [2024-04-26 20:22:24.228367] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:06.093 [2024-04-26 20:22:24.228521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3321098 ] 00:05:06.093 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.352 [2024-04-26 20:22:24.552121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.352 [2024-04-26 20:22:24.633476] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:06.352 [2024-04-26 20:22:24.633674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.923 20:22:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:06.923 20:22:24 -- common/autotest_common.sh@852 -- # return 0 00:05:06.923 20:22:24 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:06.923 00:05:06.923 20:22:24 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:06.923 INFO: shutting down applications... 
00:05:06.923 20:22:24 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:06.923 20:22:24 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:06.923 20:22:24 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:06.923 20:22:24 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 3321098 ]] 00:05:06.923 20:22:24 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 3321098 00:05:06.923 20:22:24 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:06.923 20:22:24 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:06.923 20:22:24 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3321098 00:05:06.923 20:22:24 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:07.184 20:22:25 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:07.184 20:22:25 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:07.184 20:22:25 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3321098 00:05:07.184 20:22:25 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:07.755 20:22:26 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:07.755 20:22:26 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:07.755 20:22:26 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3321098 00:05:07.755 20:22:26 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:07.755 20:22:26 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:07.755 20:22:26 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:07.755 20:22:26 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:07.755 SPDK target shutdown done 00:05:07.755 20:22:26 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:07.755 Success 00:05:07.755 00:05:07.755 real 0m1.919s 00:05:07.755 user 0m1.761s 00:05:07.755 sys 0m0.446s 00:05:07.755 20:22:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.755 20:22:26 -- common/autotest_common.sh@10 -- # set +x 00:05:07.755 ************************************ 00:05:07.755 END TEST json_config_extra_key 00:05:07.755 ************************************ 00:05:07.755 20:22:26 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:07.755 20:22:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:07.755 20:22:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:07.755 20:22:26 -- common/autotest_common.sh@10 -- # set +x 00:05:07.755 ************************************ 00:05:07.755 START TEST alias_rpc 00:05:07.755 ************************************ 00:05:07.755 20:22:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:08.016 * Looking for test storage... 
00:05:08.016 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/alias_rpc 00:05:08.016 20:22:26 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:08.016 20:22:26 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3321536 00:05:08.016 20:22:26 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3321536 00:05:08.016 20:22:26 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:05:08.016 20:22:26 -- common/autotest_common.sh@819 -- # '[' -z 3321536 ']' 00:05:08.016 20:22:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.016 20:22:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:08.016 20:22:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.016 20:22:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:08.016 20:22:26 -- common/autotest_common.sh@10 -- # set +x 00:05:08.016 [2024-04-26 20:22:26.163171] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:08.016 [2024-04-26 20:22:26.163269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3321536 ] 00:05:08.016 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.016 [2024-04-26 20:22:26.252896] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.016 [2024-04-26 20:22:26.348880] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:08.016 [2024-04-26 20:22:26.349080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.959 20:22:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:08.959 20:22:26 -- common/autotest_common.sh@852 -- # return 0 00:05:08.959 20:22:26 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:08.959 20:22:27 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3321536 00:05:08.959 20:22:27 -- common/autotest_common.sh@926 -- # '[' -z 3321536 ']' 00:05:08.959 20:22:27 -- common/autotest_common.sh@930 -- # kill -0 3321536 00:05:08.959 20:22:27 -- common/autotest_common.sh@931 -- # uname 00:05:08.959 20:22:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:08.959 20:22:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3321536 00:05:08.959 20:22:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:08.959 20:22:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:08.959 20:22:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3321536' 00:05:08.959 killing process with pid 3321536 00:05:08.959 20:22:27 -- common/autotest_common.sh@945 -- # kill 3321536 00:05:08.959 20:22:27 -- common/autotest_common.sh@950 -- # wait 3321536 00:05:09.902 00:05:09.902 real 0m1.965s 00:05:09.902 user 0m2.018s 00:05:09.902 sys 0m0.401s 00:05:09.902 20:22:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.902 20:22:28 -- common/autotest_common.sh@10 -- # set +x 00:05:09.902 ************************************ 00:05:09.902 END TEST alias_rpc 00:05:09.902 ************************************ 00:05:09.902 20:22:28 -- 
spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:05:09.902 20:22:28 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:09.902 20:22:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:09.902 20:22:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:09.902 20:22:28 -- common/autotest_common.sh@10 -- # set +x 00:05:09.902 ************************************ 00:05:09.902 START TEST spdkcli_tcp 00:05:09.902 ************************************ 00:05:09.902 20:22:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:09.902 * Looking for test storage... 00:05:09.902 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli 00:05:09.902 20:22:28 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/common.sh 00:05:09.902 20:22:28 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:09.902 20:22:28 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/clear_config.py 00:05:09.902 20:22:28 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:09.902 20:22:28 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:09.902 20:22:28 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:09.902 20:22:28 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:09.902 20:22:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:09.902 20:22:28 -- common/autotest_common.sh@10 -- # set +x 00:05:09.902 20:22:28 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3322101 00:05:09.902 20:22:28 -- spdkcli/tcp.sh@27 -- # waitforlisten 3322101 00:05:09.902 20:22:28 -- common/autotest_common.sh@819 -- # '[' -z 3322101 ']' 00:05:09.902 20:22:28 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:09.902 20:22:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.902 20:22:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:09.902 20:22:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.902 20:22:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:09.902 20:22:28 -- common/autotest_common.sh@10 -- # set +x 00:05:09.902 [2024-04-26 20:22:28.214376] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
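The tcp.sh test that runs next never talks to the UNIX-domain RPC socket directly: as the trace below shows, it bridges the socket to TCP with socat and drives rpc.py at 127.0.0.1:9998. The bridge reduces to the following (port, retry count, and timeout as used by the test):

    # Expose the UNIX-domain RPC socket on TCP port 9998, then call it remotely.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # -r 100: connection retries; -t 2: per-call timeout, matching the test
    /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py \
        -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"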
00:05:09.902 [2024-04-26 20:22:28.214514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3322101 ] 00:05:10.162 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.162 [2024-04-26 20:22:28.330857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.162 [2024-04-26 20:22:28.426458] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:10.162 [2024-04-26 20:22:28.426713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.162 [2024-04-26 20:22:28.426723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.735 20:22:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:10.735 20:22:28 -- common/autotest_common.sh@852 -- # return 0 00:05:10.735 20:22:28 -- spdkcli/tcp.sh@31 -- # socat_pid=3322138 00:05:10.735 20:22:28 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:10.735 20:22:28 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:10.996 [ 00:05:10.996 "bdev_malloc_delete", 00:05:10.996 "bdev_malloc_create", 00:05:10.996 "bdev_null_resize", 00:05:10.996 "bdev_null_delete", 00:05:10.996 "bdev_null_create", 00:05:10.996 "bdev_nvme_cuse_unregister", 00:05:10.996 "bdev_nvme_cuse_register", 00:05:10.996 "bdev_opal_new_user", 00:05:10.996 "bdev_opal_set_lock_state", 00:05:10.996 "bdev_opal_delete", 00:05:10.996 "bdev_opal_get_info", 00:05:10.996 "bdev_opal_create", 00:05:10.996 "bdev_nvme_opal_revert", 00:05:10.996 "bdev_nvme_opal_init", 00:05:10.996 "bdev_nvme_send_cmd", 00:05:10.996 "bdev_nvme_get_path_iostat", 00:05:10.996 "bdev_nvme_get_mdns_discovery_info", 00:05:10.996 "bdev_nvme_stop_mdns_discovery", 00:05:10.996 "bdev_nvme_start_mdns_discovery", 00:05:10.996 "bdev_nvme_set_multipath_policy", 00:05:10.996 "bdev_nvme_set_preferred_path", 00:05:10.996 "bdev_nvme_get_io_paths", 00:05:10.996 "bdev_nvme_remove_error_injection", 00:05:10.996 "bdev_nvme_add_error_injection", 00:05:10.996 "bdev_nvme_get_discovery_info", 00:05:10.996 "bdev_nvme_stop_discovery", 00:05:10.996 "bdev_nvme_start_discovery", 00:05:10.996 "bdev_nvme_get_controller_health_info", 00:05:10.996 "bdev_nvme_disable_controller", 00:05:10.996 "bdev_nvme_enable_controller", 00:05:10.996 "bdev_nvme_reset_controller", 00:05:10.996 "bdev_nvme_get_transport_statistics", 00:05:10.996 "bdev_nvme_apply_firmware", 00:05:10.996 "bdev_nvme_detach_controller", 00:05:10.996 "bdev_nvme_get_controllers", 00:05:10.996 "bdev_nvme_attach_controller", 00:05:10.996 "bdev_nvme_set_hotplug", 00:05:10.996 "bdev_nvme_set_options", 00:05:10.996 "bdev_passthru_delete", 00:05:10.996 "bdev_passthru_create", 00:05:10.996 "bdev_lvol_grow_lvstore", 00:05:10.996 "bdev_lvol_get_lvols", 00:05:10.996 "bdev_lvol_get_lvstores", 00:05:10.996 "bdev_lvol_delete", 00:05:10.996 "bdev_lvol_set_read_only", 00:05:10.996 "bdev_lvol_resize", 00:05:10.996 "bdev_lvol_decouple_parent", 00:05:10.996 "bdev_lvol_inflate", 00:05:10.996 "bdev_lvol_rename", 00:05:10.996 "bdev_lvol_clone_bdev", 00:05:10.996 "bdev_lvol_clone", 00:05:10.996 "bdev_lvol_snapshot", 00:05:10.996 "bdev_lvol_create", 00:05:10.996 "bdev_lvol_delete_lvstore", 00:05:10.996 "bdev_lvol_rename_lvstore", 00:05:10.996 "bdev_lvol_create_lvstore", 00:05:10.996 "bdev_raid_set_options", 00:05:10.996 
"bdev_raid_remove_base_bdev", 00:05:10.996 "bdev_raid_add_base_bdev", 00:05:10.996 "bdev_raid_delete", 00:05:10.996 "bdev_raid_create", 00:05:10.996 "bdev_raid_get_bdevs", 00:05:10.996 "bdev_error_inject_error", 00:05:10.996 "bdev_error_delete", 00:05:10.996 "bdev_error_create", 00:05:10.996 "bdev_split_delete", 00:05:10.996 "bdev_split_create", 00:05:10.996 "bdev_delay_delete", 00:05:10.996 "bdev_delay_create", 00:05:10.996 "bdev_delay_update_latency", 00:05:10.996 "bdev_zone_block_delete", 00:05:10.996 "bdev_zone_block_create", 00:05:10.996 "blobfs_create", 00:05:10.996 "blobfs_detect", 00:05:10.996 "blobfs_set_cache_size", 00:05:10.996 "bdev_aio_delete", 00:05:10.996 "bdev_aio_rescan", 00:05:10.996 "bdev_aio_create", 00:05:10.996 "bdev_ftl_set_property", 00:05:10.996 "bdev_ftl_get_properties", 00:05:10.996 "bdev_ftl_get_stats", 00:05:10.996 "bdev_ftl_unmap", 00:05:10.996 "bdev_ftl_unload", 00:05:10.996 "bdev_ftl_delete", 00:05:10.996 "bdev_ftl_load", 00:05:10.996 "bdev_ftl_create", 00:05:10.996 "bdev_virtio_attach_controller", 00:05:10.996 "bdev_virtio_scsi_get_devices", 00:05:10.996 "bdev_virtio_detach_controller", 00:05:10.996 "bdev_virtio_blk_set_hotplug", 00:05:10.996 "bdev_iscsi_delete", 00:05:10.996 "bdev_iscsi_create", 00:05:10.996 "bdev_iscsi_set_options", 00:05:10.996 "accel_error_inject_error", 00:05:10.996 "ioat_scan_accel_module", 00:05:10.996 "dsa_scan_accel_module", 00:05:10.996 "iaa_scan_accel_module", 00:05:10.996 "iscsi_set_options", 00:05:10.996 "iscsi_get_auth_groups", 00:05:10.996 "iscsi_auth_group_remove_secret", 00:05:10.996 "iscsi_auth_group_add_secret", 00:05:10.996 "iscsi_delete_auth_group", 00:05:10.996 "iscsi_create_auth_group", 00:05:10.996 "iscsi_set_discovery_auth", 00:05:10.996 "iscsi_get_options", 00:05:10.996 "iscsi_target_node_request_logout", 00:05:10.996 "iscsi_target_node_set_redirect", 00:05:10.996 "iscsi_target_node_set_auth", 00:05:10.996 "iscsi_target_node_add_lun", 00:05:10.996 "iscsi_get_connections", 00:05:10.996 "iscsi_portal_group_set_auth", 00:05:10.996 "iscsi_start_portal_group", 00:05:10.997 "iscsi_delete_portal_group", 00:05:10.997 "iscsi_create_portal_group", 00:05:10.997 "iscsi_get_portal_groups", 00:05:10.997 "iscsi_delete_target_node", 00:05:10.997 "iscsi_target_node_remove_pg_ig_maps", 00:05:10.997 "iscsi_target_node_add_pg_ig_maps", 00:05:10.997 "iscsi_create_target_node", 00:05:10.997 "iscsi_get_target_nodes", 00:05:10.997 "iscsi_delete_initiator_group", 00:05:10.997 "iscsi_initiator_group_remove_initiators", 00:05:10.997 "iscsi_initiator_group_add_initiators", 00:05:10.997 "iscsi_create_initiator_group", 00:05:10.997 "iscsi_get_initiator_groups", 00:05:10.997 "nvmf_set_crdt", 00:05:10.997 "nvmf_set_config", 00:05:10.997 "nvmf_set_max_subsystems", 00:05:10.997 "nvmf_subsystem_get_listeners", 00:05:10.997 "nvmf_subsystem_get_qpairs", 00:05:10.997 "nvmf_subsystem_get_controllers", 00:05:10.997 "nvmf_get_stats", 00:05:10.997 "nvmf_get_transports", 00:05:10.997 "nvmf_create_transport", 00:05:10.997 "nvmf_get_targets", 00:05:10.997 "nvmf_delete_target", 00:05:10.997 "nvmf_create_target", 00:05:10.997 "nvmf_subsystem_allow_any_host", 00:05:10.997 "nvmf_subsystem_remove_host", 00:05:10.997 "nvmf_subsystem_add_host", 00:05:10.997 "nvmf_subsystem_remove_ns", 00:05:10.997 "nvmf_subsystem_add_ns", 00:05:10.997 "nvmf_subsystem_listener_set_ana_state", 00:05:10.997 "nvmf_discovery_get_referrals", 00:05:10.997 "nvmf_discovery_remove_referral", 00:05:10.997 "nvmf_discovery_add_referral", 00:05:10.997 "nvmf_subsystem_remove_listener", 
00:05:10.997 "nvmf_subsystem_add_listener", 00:05:10.997 "nvmf_delete_subsystem", 00:05:10.997 "nvmf_create_subsystem", 00:05:10.997 "nvmf_get_subsystems", 00:05:10.997 "env_dpdk_get_mem_stats", 00:05:10.997 "nbd_get_disks", 00:05:10.997 "nbd_stop_disk", 00:05:10.997 "nbd_start_disk", 00:05:10.997 "ublk_recover_disk", 00:05:10.997 "ublk_get_disks", 00:05:10.997 "ublk_stop_disk", 00:05:10.997 "ublk_start_disk", 00:05:10.997 "ublk_destroy_target", 00:05:10.997 "ublk_create_target", 00:05:10.997 "virtio_blk_create_transport", 00:05:10.997 "virtio_blk_get_transports", 00:05:10.997 "vhost_controller_set_coalescing", 00:05:10.997 "vhost_get_controllers", 00:05:10.997 "vhost_delete_controller", 00:05:10.997 "vhost_create_blk_controller", 00:05:10.997 "vhost_scsi_controller_remove_target", 00:05:10.997 "vhost_scsi_controller_add_target", 00:05:10.997 "vhost_start_scsi_controller", 00:05:10.997 "vhost_create_scsi_controller", 00:05:10.997 "thread_set_cpumask", 00:05:10.997 "framework_get_scheduler", 00:05:10.997 "framework_set_scheduler", 00:05:10.997 "framework_get_reactors", 00:05:10.997 "thread_get_io_channels", 00:05:10.997 "thread_get_pollers", 00:05:10.997 "thread_get_stats", 00:05:10.997 "framework_monitor_context_switch", 00:05:10.997 "spdk_kill_instance", 00:05:10.997 "log_enable_timestamps", 00:05:10.997 "log_get_flags", 00:05:10.997 "log_clear_flag", 00:05:10.997 "log_set_flag", 00:05:10.997 "log_get_level", 00:05:10.997 "log_set_level", 00:05:10.997 "log_get_print_level", 00:05:10.997 "log_set_print_level", 00:05:10.997 "framework_enable_cpumask_locks", 00:05:10.997 "framework_disable_cpumask_locks", 00:05:10.997 "framework_wait_init", 00:05:10.997 "framework_start_init", 00:05:10.997 "scsi_get_devices", 00:05:10.997 "bdev_get_histogram", 00:05:10.997 "bdev_enable_histogram", 00:05:10.997 "bdev_set_qos_limit", 00:05:10.997 "bdev_set_qd_sampling_period", 00:05:10.997 "bdev_get_bdevs", 00:05:10.997 "bdev_reset_iostat", 00:05:10.997 "bdev_get_iostat", 00:05:10.997 "bdev_examine", 00:05:10.997 "bdev_wait_for_examine", 00:05:10.997 "bdev_set_options", 00:05:10.997 "notify_get_notifications", 00:05:10.997 "notify_get_types", 00:05:10.997 "accel_get_stats", 00:05:10.997 "accel_set_options", 00:05:10.997 "accel_set_driver", 00:05:10.997 "accel_crypto_key_destroy", 00:05:10.997 "accel_crypto_keys_get", 00:05:10.997 "accel_crypto_key_create", 00:05:10.997 "accel_assign_opc", 00:05:10.997 "accel_get_module_info", 00:05:10.997 "accel_get_opc_assignments", 00:05:10.997 "vmd_rescan", 00:05:10.997 "vmd_remove_device", 00:05:10.997 "vmd_enable", 00:05:10.997 "sock_set_default_impl", 00:05:10.997 "sock_impl_set_options", 00:05:10.997 "sock_impl_get_options", 00:05:10.997 "iobuf_get_stats", 00:05:10.997 "iobuf_set_options", 00:05:10.997 "framework_get_pci_devices", 00:05:10.997 "framework_get_config", 00:05:10.997 "framework_get_subsystems", 00:05:10.997 "trace_get_info", 00:05:10.997 "trace_get_tpoint_group_mask", 00:05:10.997 "trace_disable_tpoint_group", 00:05:10.997 "trace_enable_tpoint_group", 00:05:10.997 "trace_clear_tpoint_mask", 00:05:10.997 "trace_set_tpoint_mask", 00:05:10.997 "spdk_get_version", 00:05:10.997 "rpc_get_methods" 00:05:10.997 ] 00:05:10.997 20:22:29 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:10.997 20:22:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:10.997 20:22:29 -- common/autotest_common.sh@10 -- # set +x 00:05:10.997 20:22:29 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:10.997 20:22:29 -- spdkcli/tcp.sh@38 -- # killprocess 
3322101 00:05:10.997 20:22:29 -- common/autotest_common.sh@926 -- # '[' -z 3322101 ']' 00:05:10.997 20:22:29 -- common/autotest_common.sh@930 -- # kill -0 3322101 00:05:10.997 20:22:29 -- common/autotest_common.sh@931 -- # uname 00:05:10.997 20:22:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:10.997 20:22:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3322101 00:05:10.997 20:22:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:10.997 20:22:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:10.997 20:22:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3322101' 00:05:10.997 killing process with pid 3322101 00:05:10.997 20:22:29 -- common/autotest_common.sh@945 -- # kill 3322101 00:05:10.997 20:22:29 -- common/autotest_common.sh@950 -- # wait 3322101 00:05:11.940 00:05:11.940 real 0m2.082s 00:05:11.940 user 0m3.681s 00:05:11.940 sys 0m0.531s 00:05:11.940 20:22:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.940 20:22:30 -- common/autotest_common.sh@10 -- # set +x 00:05:11.940 ************************************ 00:05:11.940 END TEST spdkcli_tcp 00:05:11.940 ************************************ 00:05:11.940 20:22:30 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/dsa-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:11.940 20:22:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:11.940 20:22:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:11.940 20:22:30 -- common/autotest_common.sh@10 -- # set +x 00:05:11.940 ************************************ 00:05:11.940 START TEST dpdk_mem_utility 00:05:11.940 ************************************ 00:05:11.940 20:22:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:11.940 * Looking for test storage... 00:05:11.940 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/dpdk_memory_utility 00:05:11.941 20:22:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:11.941 20:22:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3322480 00:05:11.941 20:22:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3322480 00:05:11.941 20:22:30 -- common/autotest_common.sh@819 -- # '[' -z 3322480 ']' 00:05:11.941 20:22:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.941 20:22:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:11.941 20:22:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.941 20:22:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:11.941 20:22:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.941 20:22:30 -- common/autotest_common.sh@10 -- # set +x 00:05:12.202 [2024-04-26 20:22:30.321613] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
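The memory report that follows is produced in two steps: the env_dpdk_get_mem_stats RPC makes the target write its DPDK statistics to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then summarizes that file, with -m selecting a single heap for the per-element listing. In sketch form, using this workspace's paths:

    # Two steps: dump DPDK memory stats over RPC, then post-process the dump.
    rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk
    "$rootdir/scripts/rpc.py" env_dpdk_get_mem_stats     # writes /tmp/spdk_mem_dump.txt
    "$rootdir/scripts/dpdk_mem_info.py"                  # heap/mempool/memzone summary
    "$rootdir/scripts/dpdk_mem_info.py" -m 0             # per-element detail for heap id 0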
00:05:12.202 [2024-04-26 20:22:30.321746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3322480 ] 00:05:12.202 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.202 [2024-04-26 20:22:30.438642] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.202 [2024-04-26 20:22:30.535831] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:12.202 [2024-04-26 20:22:30.536031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.773 20:22:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:12.774 20:22:31 -- common/autotest_common.sh@852 -- # return 0 00:05:12.774 20:22:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:12.774 20:22:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:12.774 20:22:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:12.774 20:22:31 -- common/autotest_common.sh@10 -- # set +x 00:05:13.035 { 00:05:13.035 "filename": "/tmp/spdk_mem_dump.txt" 00:05:13.035 } 00:05:13.035 20:22:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:13.035 20:22:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:13.035 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:13.035 1 heaps totaling size 820.000000 MiB 00:05:13.035 size: 820.000000 MiB heap id: 0 00:05:13.035 end heaps---------- 00:05:13.035 8 mempools totaling size 598.116089 MiB 00:05:13.035 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:13.035 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:13.035 size: 84.521057 MiB name: bdev_io_3322480 00:05:13.035 size: 51.011292 MiB name: evtpool_3322480 00:05:13.035 size: 50.003479 MiB name: msgpool_3322480 00:05:13.035 size: 21.763794 MiB name: PDU_Pool 00:05:13.035 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:13.035 size: 0.026123 MiB name: Session_Pool 00:05:13.035 end mempools------- 00:05:13.035 6 memzones totaling size 4.142822 MiB 00:05:13.035 size: 1.000366 MiB name: RG_ring_0_3322480 00:05:13.035 size: 1.000366 MiB name: RG_ring_1_3322480 00:05:13.035 size: 1.000366 MiB name: RG_ring_4_3322480 00:05:13.035 size: 1.000366 MiB name: RG_ring_5_3322480 00:05:13.035 size: 0.125366 MiB name: RG_ring_2_3322480 00:05:13.035 size: 0.015991 MiB name: RG_ring_3_3322480 00:05:13.035 end memzones------- 00:05:13.035 20:22:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:13.035 heap id: 0 total size: 820.000000 MiB number of busy elements: 41 number of free elements: 19 00:05:13.035 list of free elements. 
size: 18.514832 MiB 00:05:13.035 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:13.035 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:13.035 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:13.035 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:13.035 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:13.035 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:13.035 element at address: 0x200019600000 with size: 0.999329 MiB 00:05:13.035 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:13.035 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:13.035 element at address: 0x200018e00000 with size: 0.959900 MiB 00:05:13.035 element at address: 0x200019900040 with size: 0.937256 MiB 00:05:13.035 element at address: 0x200000200000 with size: 0.840942 MiB 00:05:13.035 element at address: 0x20001b000000 with size: 0.583191 MiB 00:05:13.035 element at address: 0x200019200000 with size: 0.491150 MiB 00:05:13.035 element at address: 0x200019a00000 with size: 0.485657 MiB 00:05:13.035 element at address: 0x200013800000 with size: 0.470581 MiB 00:05:13.035 element at address: 0x200028400000 with size: 0.411072 MiB 00:05:13.035 element at address: 0x200003a00000 with size: 0.356140 MiB 00:05:13.035 element at address: 0x20000b1ff040 with size: 0.001038 MiB 00:05:13.035 list of standard malloc elements. size: 199.220764 MiB 00:05:13.035 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:13.035 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:13.035 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:13.035 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:13.035 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:13.035 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:13.035 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:13.035 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:13.035 element at address: 0x2000137ff040 with size: 0.000427 MiB 00:05:13.035 element at address: 0x2000137ffa00 with size: 0.000366 MiB 00:05:13.035 element at address: 0x2000002d7480 with size: 0.000244 MiB 00:05:13.035 element at address: 0x2000002d7580 with size: 0.000244 MiB 00:05:13.035 element at address: 0x2000002d7680 with size: 0.000244 MiB 00:05:13.035 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:13.035 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:13.035 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:13.035 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:13.035 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:13.035 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:13.035 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:13.035 element at address: 0x20000b1ff480 with size: 0.000244 MiB 00:05:13.035 element at address: 0x20000b1ff580 with size: 0.000244 MiB 00:05:13.035 element at address: 0x20000b1ff680 with size: 0.000244 MiB 00:05:13.035 element at address: 0x20000b1ff780 with size: 0.000244 MiB 00:05:13.035 element at address: 0x20000b1ff880 with size: 0.000244 MiB 00:05:13.035 element at address: 0x20000b1ff980 with size: 0.000244 MiB 00:05:13.035 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:13.035 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:13.035 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 
00:05:13.035 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:13.035 element at address: 0x2000137ff200 with size: 0.000244 MiB 00:05:13.035 element at address: 0x2000137ff300 with size: 0.000244 MiB 00:05:13.035 element at address: 0x2000137ff400 with size: 0.000244 MiB 00:05:13.035 element at address: 0x2000137ff500 with size: 0.000244 MiB 00:05:13.035 element at address: 0x2000137ff600 with size: 0.000244 MiB 00:05:13.036 element at address: 0x2000137ff700 with size: 0.000244 MiB 00:05:13.036 element at address: 0x2000137ff800 with size: 0.000244 MiB 00:05:13.036 element at address: 0x2000137ff900 with size: 0.000244 MiB 00:05:13.036 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:13.036 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:13.036 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:13.036 list of memzone associated elements. size: 602.264404 MiB 00:05:13.036 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:13.036 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:13.036 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:13.036 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:13.036 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:13.036 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3322480_0 00:05:13.036 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:13.036 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3322480_0 00:05:13.036 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:13.036 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3322480_0 00:05:13.036 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:13.036 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:13.036 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:13.036 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:13.036 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:13.036 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3322480 00:05:13.036 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:13.036 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3322480 00:05:13.036 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:13.036 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3322480 00:05:13.036 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:13.036 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:13.036 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:13.036 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:13.036 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:13.036 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:13.036 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:13.036 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:13.036 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:13.036 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3322480 00:05:13.036 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:13.036 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3322480 00:05:13.036 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:13.036 associated memzone info: size: 
1.000366 MiB name: RG_ring_4_3322480 00:05:13.036 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:13.036 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3322480 00:05:13.036 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:13.036 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3322480 00:05:13.036 element at address: 0x20001927dbc0 with size: 0.500549 MiB 00:05:13.036 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:13.036 element at address: 0x200013878780 with size: 0.500549 MiB 00:05:13.036 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:13.036 element at address: 0x200019a7c540 with size: 0.250549 MiB 00:05:13.036 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:13.036 element at address: 0x200003adf740 with size: 0.125549 MiB 00:05:13.036 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3322480 00:05:13.036 element at address: 0x200018ef5bc0 with size: 0.031799 MiB 00:05:13.036 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:13.036 element at address: 0x2000284693c0 with size: 0.023804 MiB 00:05:13.036 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:13.036 element at address: 0x200003adb500 with size: 0.016174 MiB 00:05:13.036 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3322480 00:05:13.036 element at address: 0x20002846f540 with size: 0.002502 MiB 00:05:13.036 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:13.036 element at address: 0x2000002d7780 with size: 0.000366 MiB 00:05:13.036 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3322480 00:05:13.036 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:13.036 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3322480 00:05:13.036 element at address: 0x20000b1ffa80 with size: 0.000366 MiB 00:05:13.036 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:13.036 20:22:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:13.036 20:22:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3322480 00:05:13.036 20:22:31 -- common/autotest_common.sh@926 -- # '[' -z 3322480 ']' 00:05:13.036 20:22:31 -- common/autotest_common.sh@930 -- # kill -0 3322480 00:05:13.036 20:22:31 -- common/autotest_common.sh@931 -- # uname 00:05:13.036 20:22:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:13.036 20:22:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3322480 00:05:13.036 20:22:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:13.036 20:22:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:13.036 20:22:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3322480' 00:05:13.036 killing process with pid 3322480 00:05:13.036 20:22:31 -- common/autotest_common.sh@945 -- # kill 3322480 00:05:13.036 20:22:31 -- common/autotest_common.sh@950 -- # wait 3322480 00:05:13.979 00:05:13.979 real 0m1.915s 00:05:13.979 user 0m1.923s 00:05:13.979 sys 0m0.449s 00:05:13.979 20:22:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.979 20:22:32 -- common/autotest_common.sh@10 -- # set +x 00:05:13.979 ************************************ 00:05:13.980 END TEST dpdk_mem_utility 00:05:13.980 ************************************ 00:05:13.980 20:22:32 -- spdk/autotest.sh@187 -- # run_test event 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event.sh 00:05:13.980 20:22:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:13.980 20:22:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:13.980 20:22:32 -- common/autotest_common.sh@10 -- # set +x 00:05:13.980 ************************************ 00:05:13.980 START TEST event 00:05:13.980 ************************************ 00:05:13.980 20:22:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event.sh 00:05:13.980 * Looking for test storage... 00:05:13.980 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event 00:05:13.980 20:22:32 -- event/event.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:13.980 20:22:32 -- bdev/nbd_common.sh@6 -- # set -e 00:05:13.980 20:22:32 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:13.980 20:22:32 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:13.980 20:22:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:13.980 20:22:32 -- common/autotest_common.sh@10 -- # set +x 00:05:13.980 ************************************ 00:05:13.980 START TEST event_perf 00:05:13.980 ************************************ 00:05:13.980 20:22:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:13.980 Running I/O for 1 seconds...[2024-04-26 20:22:32.259773] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:13.980 [2024-04-26 20:22:32.259916] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3322948 ] 00:05:14.240 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.240 [2024-04-26 20:22:32.396078] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:14.240 [2024-04-26 20:22:32.496604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.240 [2024-04-26 20:22:32.496642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:14.240 [2024-04-26 20:22:32.496744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.240 [2024-04-26 20:22:32.496755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:15.622 Running I/O for 1 seconds... 00:05:15.622 lcore 0: 153170 00:05:15.622 lcore 1: 153168 00:05:15.622 lcore 2: 153170 00:05:15.622 lcore 3: 153170 00:05:15.622 done. 
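Note on the per-lcore counts above: they come from the event_perf benchmark launched earlier in this trace. For reference, the run can be repeated by hand from an in-tree SPDK build, using the same binary and flags shown in the log:

  # Drive the event framework on 4 cores (mask 0xF) for 1 second
  ./test/event/event_perf/event_perf -m 0xF -t 1
  # One reactor runs per lcore in the mask; roughly equal per-lcore event
  # counts (the ~153k figures above) indicate the load spread evenly.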
00:05:15.622 00:05:15.622 real 0m1.445s 00:05:15.622 user 0m4.263s 00:05:15.622 sys 0m0.170s 00:05:15.622 20:22:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.622 20:22:33 -- common/autotest_common.sh@10 -- # set +x 00:05:15.622 ************************************ 00:05:15.622 END TEST event_perf 00:05:15.622 ************************************ 00:05:15.622 20:22:33 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:15.622 20:22:33 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:15.622 20:22:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:15.622 20:22:33 -- common/autotest_common.sh@10 -- # set +x 00:05:15.622 ************************************ 00:05:15.622 START TEST event_reactor 00:05:15.622 ************************************ 00:05:15.622 20:22:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:15.622 [2024-04-26 20:22:33.749787] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:15.622 [2024-04-26 20:22:33.749935] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3323251 ] 00:05:15.622 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.622 [2024-04-26 20:22:33.886496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.882 [2024-04-26 20:22:33.983557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.825 test_start 00:05:16.825 oneshot 00:05:16.825 tick 100 00:05:16.825 tick 100 00:05:16.825 tick 250 00:05:16.825 tick 100 00:05:16.825 tick 100 00:05:16.825 tick 100 00:05:16.825 tick 250 00:05:16.825 tick 500 00:05:16.825 tick 100 00:05:16.825 tick 100 00:05:16.825 tick 250 00:05:16.825 tick 100 00:05:16.825 tick 100 00:05:16.825 test_end 00:05:16.825 00:05:16.825 real 0m1.438s 00:05:16.825 user 0m1.273s 00:05:16.825 sys 0m0.157s 00:05:16.825 20:22:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.825 20:22:35 -- common/autotest_common.sh@10 -- # set +x 00:05:16.825 ************************************ 00:05:16.825 END TEST event_reactor 00:05:16.825 ************************************ 00:05:17.085 20:22:35 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:17.085 20:22:35 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:17.085 20:22:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:17.085 20:22:35 -- common/autotest_common.sh@10 -- # set +x 00:05:17.085 ************************************ 00:05:17.085 START TEST event_reactor_perf 00:05:17.085 ************************************ 00:05:17.085 20:22:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:17.085 [2024-04-26 20:22:35.233896] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
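The "oneshot"/"tick" lines above are printed by the reactor test as its timed events fire over the 1-second run; the numeric labels read as the test's own event identifiers rather than units this log confirms. The run itself uses the binary and flag shown earlier in the trace:

  # Exercise a single reactor's timed events for 1 second
  ./test/event/reactor/reactor -t 1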
00:05:17.085 [2024-04-26 20:22:35.234035] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3323552 ] 00:05:17.085 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.085 [2024-04-26 20:22:35.366721] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.346 [2024-04-26 20:22:35.463703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.289 test_start 00:05:18.289 test_end 00:05:18.289 Performance: 427157 events per second 00:05:18.289 00:05:18.289 real 0m1.421s 00:05:18.289 user 0m1.269s 00:05:18.289 sys 0m0.145s 00:05:18.289 20:22:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.289 20:22:36 -- common/autotest_common.sh@10 -- # set +x 00:05:18.289 ************************************ 00:05:18.289 END TEST event_reactor_perf 00:05:18.289 ************************************ 00:05:18.551 20:22:36 -- event/event.sh@49 -- # uname -s 00:05:18.551 20:22:36 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:18.551 20:22:36 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:18.551 20:22:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:18.551 20:22:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.551 20:22:36 -- common/autotest_common.sh@10 -- # set +x 00:05:18.551 ************************************ 00:05:18.551 START TEST event_scheduler 00:05:18.551 ************************************ 00:05:18.551 20:22:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:18.551 * Looking for test storage... 00:05:18.551 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler 00:05:18.551 20:22:36 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:18.551 20:22:36 -- scheduler/scheduler.sh@35 -- # scheduler_pid=3323923 00:05:18.551 20:22:36 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.551 20:22:36 -- scheduler/scheduler.sh@37 -- # waitforlisten 3323923 00:05:18.551 20:22:36 -- common/autotest_common.sh@819 -- # '[' -z 3323923 ']' 00:05:18.551 20:22:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.551 20:22:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:18.551 20:22:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.551 20:22:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:18.551 20:22:36 -- common/autotest_common.sh@10 -- # set +x 00:05:18.551 20:22:36 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:18.551 [2024-04-26 20:22:36.793147] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
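The "Performance: 427157 events per second" figure above is reactor_perf's headline number: the rate at which one reactor dequeued and executed events on this host over the 1-second window. The measurement can be repeated with the binary and flag shown in the trace:

  # Measure single-reactor event throughput for 1 second
  ./test/event/reactor_perf/reactor_perf -t 1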
00:05:18.551 [2024-04-26 20:22:36.793275] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3323923 ] 00:05:18.551 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.812 [2024-04-26 20:22:36.916734] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:18.813 [2024-04-26 20:22:37.016927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.813 [2024-04-26 20:22:37.017120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.813 [2024-04-26 20:22:37.017226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:18.813 [2024-04-26 20:22:37.017236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:19.384 20:22:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:19.384 20:22:37 -- common/autotest_common.sh@852 -- # return 0 00:05:19.384 20:22:37 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:19.384 20:22:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.384 20:22:37 -- common/autotest_common.sh@10 -- # set +x 00:05:19.384 POWER: Env isn't set yet! 00:05:19.384 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:19.384 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:19.384 POWER: Cannot set governor of lcore 0 to userspace 00:05:19.384 POWER: Attempting to initialise PSTAT power management... 00:05:19.384 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:19.384 POWER: Initialized successfully for lcore 0 power management 00:05:19.384 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:19.384 POWER: Initialized successfully for lcore 1 power management 00:05:19.384 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:19.384 POWER: Initialized successfully for lcore 2 power management 00:05:19.384 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:19.384 POWER: Initialized successfully for lcore 3 power management 00:05:19.384 20:22:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.384 20:22:37 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:19.384 20:22:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.384 20:22:37 -- common/autotest_common.sh@10 -- # set +x 00:05:19.645 [2024-04-26 20:22:37.778700] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
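The POWER lines above show DPDK probing ACPI cpufreq, failing to select the userspace governor, then falling back to PSTAT and switching each lcore to 'performance'. The resulting state can be checked directly in sysfs, using the same path that appears in the failure message:

  # Print the active scaling governor for every CPU
  grep . /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor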
00:05:19.645 20:22:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.645 20:22:37 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:19.645 20:22:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:19.645 20:22:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.645 20:22:37 -- common/autotest_common.sh@10 -- # set +x 00:05:19.645 ************************************ 00:05:19.645 START TEST scheduler_create_thread 00:05:19.645 ************************************ 00:05:19.645 20:22:37 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:05:19.645 20:22:37 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:19.645 20:22:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.645 20:22:37 -- common/autotest_common.sh@10 -- # set +x 00:05:19.645 2 00:05:19.645 20:22:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.645 20:22:37 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:19.645 20:22:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.645 20:22:37 -- common/autotest_common.sh@10 -- # set +x 00:05:19.645 3 00:05:19.645 20:22:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.645 20:22:37 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:19.645 20:22:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.645 20:22:37 -- common/autotest_common.sh@10 -- # set +x 00:05:19.645 4 00:05:19.645 20:22:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.645 20:22:37 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:19.645 20:22:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.645 20:22:37 -- common/autotest_common.sh@10 -- # set +x 00:05:19.645 5 00:05:19.645 20:22:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.645 20:22:37 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:19.645 20:22:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.645 20:22:37 -- common/autotest_common.sh@10 -- # set +x 00:05:19.645 6 00:05:19.645 20:22:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.645 20:22:37 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:19.645 20:22:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.645 20:22:37 -- common/autotest_common.sh@10 -- # set +x 00:05:19.645 7 00:05:19.645 20:22:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.645 20:22:37 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:19.645 20:22:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.645 20:22:37 -- common/autotest_common.sh@10 -- # set +x 00:05:19.645 8 00:05:19.645 20:22:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.646 20:22:37 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:19.646 20:22:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.646 20:22:37 -- common/autotest_common.sh@10 -- # set +x 00:05:19.646 9 00:05:19.646 
20:22:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.646 20:22:37 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:19.646 20:22:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.646 20:22:37 -- common/autotest_common.sh@10 -- # set +x 00:05:19.646 10 00:05:19.646 20:22:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.646 20:22:37 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:19.646 20:22:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.646 20:22:37 -- common/autotest_common.sh@10 -- # set +x 00:05:19.646 20:22:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.646 20:22:37 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:19.646 20:22:37 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:19.646 20:22:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.646 20:22:37 -- common/autotest_common.sh@10 -- # set +x 00:05:19.646 20:22:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.646 20:22:37 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:19.646 20:22:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.646 20:22:37 -- common/autotest_common.sh@10 -- # set +x 00:05:20.649 20:22:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:20.649 20:22:38 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:20.649 20:22:38 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:20.649 20:22:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:20.649 20:22:38 -- common/autotest_common.sh@10 -- # set +x 00:05:21.591 20:22:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:21.592 00:05:21.592 real 0m2.132s 00:05:21.592 user 0m0.018s 00:05:21.592 sys 0m0.005s 00:05:21.592 20:22:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.592 20:22:39 -- common/autotest_common.sh@10 -- # set +x 00:05:21.592 ************************************ 00:05:21.592 END TEST scheduler_create_thread 00:05:21.592 ************************************ 00:05:21.852 20:22:39 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:21.852 20:22:39 -- scheduler/scheduler.sh@46 -- # killprocess 3323923 00:05:21.852 20:22:39 -- common/autotest_common.sh@926 -- # '[' -z 3323923 ']' 00:05:21.852 20:22:39 -- common/autotest_common.sh@930 -- # kill -0 3323923 00:05:21.852 20:22:39 -- common/autotest_common.sh@931 -- # uname 00:05:21.852 20:22:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:21.852 20:22:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3323923 00:05:21.852 20:22:39 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:21.852 20:22:39 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:21.852 20:22:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3323923' 00:05:21.852 killing process with pid 3323923 00:05:21.852 20:22:40 -- common/autotest_common.sh@945 -- # kill 3323923 00:05:21.852 20:22:40 -- common/autotest_common.sh@950 -- # wait 3323923 00:05:22.113 [2024-04-26 20:22:40.399386] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
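The thread churn above is driven entirely over RPC: framework_set_scheduler and framework_start_init are standard SPDK RPCs, while scheduler_thread_create, scheduler_thread_set_active and scheduler_thread_delete come from the test-only plugin loaded with --plugin scheduler_plugin (rpc_cmd is the suite's wrapper around scripts/rpc.py). A hand-run sketch of one cycle, assuming the plugin is importable as the test arranges and the app listens on the default socket; the -a value reads as an active-time percentage, matching names like one_third_active -a 30:

  # Create a thread pinned to core 0 (mask 0x1) asking for 100% active time
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  # Retune thread 11 to 50% active, then delete thread 12
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12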
00:05:22.375 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:22.375 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:22.375 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:22.375 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:22.375 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:22.375 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:22.375 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:22.375 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:22.636 00:05:22.636 real 0m4.174s 00:05:22.636 user 0m7.249s 00:05:22.636 sys 0m0.407s 00:05:22.636 20:22:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.636 20:22:40 -- common/autotest_common.sh@10 -- # set +x 00:05:22.636 ************************************ 00:05:22.636 END TEST event_scheduler 00:05:22.636 ************************************ 00:05:22.636 20:22:40 -- event/event.sh@51 -- # modprobe -n nbd 00:05:22.636 20:22:40 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:22.636 20:22:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:22.636 20:22:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.636 20:22:40 -- common/autotest_common.sh@10 -- # set +x 00:05:22.636 ************************************ 00:05:22.636 START TEST app_repeat 00:05:22.636 ************************************ 00:05:22.636 20:22:40 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:05:22.636 20:22:40 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.636 20:22:40 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.636 20:22:40 -- event/event.sh@13 -- # local nbd_list 00:05:22.636 20:22:40 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.636 20:22:40 -- event/event.sh@14 -- # local bdev_list 00:05:22.636 20:22:40 -- event/event.sh@15 -- # local repeat_times=4 00:05:22.636 20:22:40 -- event/event.sh@17 -- # modprobe nbd 00:05:22.636 20:22:40 -- event/event.sh@19 -- # repeat_pid=3324796 00:05:22.636 20:22:40 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.636 20:22:40 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3324796' 00:05:22.636 Process app_repeat pid: 3324796 00:05:22.636 20:22:40 -- event/event.sh@23 -- # for i in {0..2} 00:05:22.636 20:22:40 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:22.636 spdk_app_start Round 0 00:05:22.636 20:22:40 -- event/event.sh@25 -- # waitforlisten 3324796 /var/tmp/spdk-nbd.sock 00:05:22.636 20:22:40 -- common/autotest_common.sh@819 -- # '[' -z 3324796 ']' 00:05:22.636 20:22:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.636 20:22:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:22.636 20:22:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:22.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
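app_repeat now starts the same app three times ("spdk_app_start Round 0" through "Round 2"); each round creates two malloc bdevs (the 64 4096 arguments read as size in MB and block size), exports them as /dev/nbd0 and /dev/nbd1, and verifies data through the kernel NBD devices. A hand-driven version of one round's setup and teardown, using only RPCs that appear verbatim later in this trace:

  S=/var/tmp/spdk-nbd.sock
  ./scripts/rpc.py -s $S bdev_malloc_create 64 4096        # first call returns Malloc0
  ./scripts/rpc.py -s $S bdev_malloc_create 64 4096        # second call returns Malloc1
  ./scripts/rpc.py -s $S nbd_start_disk Malloc0 /dev/nbd0
  ./scripts/rpc.py -s $S nbd_start_disk Malloc1 /dev/nbd1
  ./scripts/rpc.py -s $S nbd_get_disks                     # lists both mappings
  ./scripts/rpc.py -s $S nbd_stop_disk /dev/nbd0
  ./scripts/rpc.py -s $S nbd_stop_disk /dev/nbd1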
00:05:22.636 20:22:40 -- event/event.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:22.636 20:22:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:22.636 20:22:40 -- common/autotest_common.sh@10 -- # set +x 00:05:22.636 [2024-04-26 20:22:40.939347] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:22.636 [2024-04-26 20:22:40.939495] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3324796 ] 00:05:22.897 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.897 [2024-04-26 20:22:41.075195] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:22.897 [2024-04-26 20:22:41.172595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.897 [2024-04-26 20:22:41.172602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.464 20:22:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:23.464 20:22:41 -- common/autotest_common.sh@852 -- # return 0 00:05:23.464 20:22:41 -- event/event.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.724 Malloc0 00:05:23.724 20:22:41 -- event/event.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.724 Malloc1 00:05:23.724 20:22:42 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.724 20:22:42 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.724 20:22:42 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.724 20:22:42 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:23.724 20:22:42 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.724 20:22:42 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:23.724 20:22:42 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.724 20:22:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.724 20:22:42 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.724 20:22:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:23.724 20:22:42 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.725 20:22:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:23.725 20:22:42 -- bdev/nbd_common.sh@12 -- # local i 00:05:23.725 20:22:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:23.725 20:22:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.725 20:22:42 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:23.984 /dev/nbd0 00:05:23.984 20:22:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:23.984 20:22:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:23.985 20:22:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:23.985 20:22:42 -- common/autotest_common.sh@857 -- # local i 00:05:23.985 20:22:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:23.985 20:22:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:23.985 20:22:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 
00:05:23.985 20:22:42 -- common/autotest_common.sh@861 -- # break 00:05:23.985 20:22:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:23.985 20:22:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:23.985 20:22:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.985 1+0 records in 00:05:23.985 1+0 records out 00:05:23.985 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254378 s, 16.1 MB/s 00:05:23.985 20:22:42 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:23.985 20:22:42 -- common/autotest_common.sh@874 -- # size=4096 00:05:23.985 20:22:42 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:23.985 20:22:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:23.985 20:22:42 -- common/autotest_common.sh@877 -- # return 0 00:05:23.985 20:22:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.985 20:22:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.985 20:22:42 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:24.244 /dev/nbd1 00:05:24.244 20:22:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:24.244 20:22:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:24.244 20:22:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:24.244 20:22:42 -- common/autotest_common.sh@857 -- # local i 00:05:24.244 20:22:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:24.244 20:22:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:24.244 20:22:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:24.244 20:22:42 -- common/autotest_common.sh@861 -- # break 00:05:24.244 20:22:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:24.244 20:22:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:24.244 20:22:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.244 1+0 records in 00:05:24.244 1+0 records out 00:05:24.244 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000163868 s, 25.0 MB/s 00:05:24.244 20:22:42 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:24.244 20:22:42 -- common/autotest_common.sh@874 -- # size=4096 00:05:24.244 20:22:42 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:24.244 20:22:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:24.244 20:22:42 -- common/autotest_common.sh@877 -- # return 0 00:05:24.244 20:22:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.244 20:22:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.244 20:22:42 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.244 20:22:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.244 20:22:42 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.244 20:22:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:24.244 { 00:05:24.244 "nbd_device": "/dev/nbd0", 00:05:24.244 "bdev_name": "Malloc0" 00:05:24.244 }, 00:05:24.244 { 00:05:24.244 "nbd_device": "/dev/nbd1", 00:05:24.244 
"bdev_name": "Malloc1" 00:05:24.244 } 00:05:24.244 ]' 00:05:24.244 20:22:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.244 20:22:42 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:24.244 { 00:05:24.244 "nbd_device": "/dev/nbd0", 00:05:24.244 "bdev_name": "Malloc0" 00:05:24.244 }, 00:05:24.244 { 00:05:24.244 "nbd_device": "/dev/nbd1", 00:05:24.244 "bdev_name": "Malloc1" 00:05:24.244 } 00:05:24.244 ]' 00:05:24.244 20:22:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:24.244 /dev/nbd1' 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:24.503 /dev/nbd1' 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@65 -- # count=2 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@95 -- # count=2 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:24.503 256+0 records in 00:05:24.503 256+0 records out 00:05:24.503 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00454205 s, 231 MB/s 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:24.503 256+0 records in 00:05:24.503 256+0 records out 00:05:24.503 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147654 s, 71.0 MB/s 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:24.503 256+0 records in 00:05:24.503 256+0 records out 00:05:24.503 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173649 s, 60.4 MB/s 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@51 -- # local i 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:24.503 20:22:42 -- bdev/nbd_common.sh@41 -- # break 00:05:24.504 20:22:42 -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.504 20:22:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.504 20:22:42 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:24.761 20:22:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:24.761 20:22:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:24.762 20:22:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:24.762 20:22:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.762 20:22:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.762 20:22:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:24.762 20:22:42 -- bdev/nbd_common.sh@41 -- # break 00:05:24.762 20:22:42 -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.762 20:22:42 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.762 20:22:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.762 20:22:42 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.027 20:22:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:25.028 20:22:43 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:25.028 20:22:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.028 20:22:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:25.028 20:22:43 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:25.028 20:22:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.028 20:22:43 -- bdev/nbd_common.sh@65 -- # true 00:05:25.028 20:22:43 -- bdev/nbd_common.sh@65 -- # count=0 00:05:25.028 20:22:43 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:25.028 20:22:43 -- bdev/nbd_common.sh@104 -- # count=0 00:05:25.028 20:22:43 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:25.028 20:22:43 -- bdev/nbd_common.sh@109 -- # return 0 00:05:25.028 20:22:43 -- event/event.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:25.028 20:22:43 -- event/event.sh@35 -- # sleep 3 00:05:25.602 [2024-04-26 
20:22:43.821219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.602 [2024-04-26 20:22:43.906416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.602 [2024-04-26 20:22:43.906422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.862 [2024-04-26 20:22:43.976542] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:25.862 [2024-04-26 20:22:43.976585] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:28.398 20:22:46 -- event/event.sh@23 -- # for i in {0..2} 00:05:28.398 20:22:46 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:28.398 spdk_app_start Round 1 00:05:28.398 20:22:46 -- event/event.sh@25 -- # waitforlisten 3324796 /var/tmp/spdk-nbd.sock 00:05:28.398 20:22:46 -- common/autotest_common.sh@819 -- # '[' -z 3324796 ']' 00:05:28.398 20:22:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.398 20:22:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:28.398 20:22:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:28.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:28.398 20:22:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:28.398 20:22:46 -- common/autotest_common.sh@10 -- # set +x 00:05:28.398 20:22:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:28.398 20:22:46 -- common/autotest_common.sh@852 -- # return 0 00:05:28.398 20:22:46 -- event/event.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.398 Malloc0 00:05:28.398 20:22:46 -- event/event.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.656 Malloc1 00:05:28.656 20:22:46 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.656 20:22:46 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.656 20:22:46 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.656 20:22:46 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:28.656 20:22:46 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.656 20:22:46 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:28.656 20:22:46 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.656 20:22:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.656 20:22:46 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.656 20:22:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:28.656 20:22:46 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.656 20:22:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:28.656 20:22:46 -- bdev/nbd_common.sh@12 -- # local i 00:05:28.656 20:22:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:28.656 20:22:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.656 20:22:46 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:28.656 /dev/nbd0 00:05:28.656 20:22:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:28.656 
20:22:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:28.656 20:22:46 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:28.656 20:22:46 -- common/autotest_common.sh@857 -- # local i 00:05:28.656 20:22:46 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:28.656 20:22:46 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:28.656 20:22:46 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:28.914 20:22:46 -- common/autotest_common.sh@861 -- # break 00:05:28.914 20:22:46 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:28.914 20:22:46 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:28.914 20:22:46 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.914 1+0 records in 00:05:28.914 1+0 records out 00:05:28.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000127186 s, 32.2 MB/s 00:05:28.914 20:22:47 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:28.914 20:22:47 -- common/autotest_common.sh@874 -- # size=4096 00:05:28.914 20:22:47 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:28.914 20:22:47 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:28.914 20:22:47 -- common/autotest_common.sh@877 -- # return 0 00:05:28.914 20:22:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.914 20:22:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.914 20:22:47 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:28.914 /dev/nbd1 00:05:28.914 20:22:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:28.914 20:22:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:28.914 20:22:47 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:28.914 20:22:47 -- common/autotest_common.sh@857 -- # local i 00:05:28.914 20:22:47 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:28.914 20:22:47 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:28.914 20:22:47 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:28.914 20:22:47 -- common/autotest_common.sh@861 -- # break 00:05:28.914 20:22:47 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:28.914 20:22:47 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:28.914 20:22:47 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.914 1+0 records in 00:05:28.914 1+0 records out 00:05:28.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178297 s, 23.0 MB/s 00:05:28.914 20:22:47 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:28.914 20:22:47 -- common/autotest_common.sh@874 -- # size=4096 00:05:28.914 20:22:47 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:28.914 20:22:47 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:28.914 20:22:47 -- common/autotest_common.sh@877 -- # return 0 00:05:28.914 20:22:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.914 20:22:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.914 20:22:47 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.914 20:22:47 -- bdev/nbd_common.sh@61 -- # 
local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.914 20:22:47 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.173 20:22:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:29.173 { 00:05:29.173 "nbd_device": "/dev/nbd0", 00:05:29.173 "bdev_name": "Malloc0" 00:05:29.173 }, 00:05:29.173 { 00:05:29.173 "nbd_device": "/dev/nbd1", 00:05:29.173 "bdev_name": "Malloc1" 00:05:29.173 } 00:05:29.173 ]' 00:05:29.173 20:22:47 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:29.173 { 00:05:29.173 "nbd_device": "/dev/nbd0", 00:05:29.173 "bdev_name": "Malloc0" 00:05:29.173 }, 00:05:29.173 { 00:05:29.173 "nbd_device": "/dev/nbd1", 00:05:29.173 "bdev_name": "Malloc1" 00:05:29.173 } 00:05:29.173 ]' 00:05:29.173 20:22:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.173 20:22:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:29.173 /dev/nbd1' 00:05:29.173 20:22:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.173 20:22:47 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:29.173 /dev/nbd1' 00:05:29.173 20:22:47 -- bdev/nbd_common.sh@65 -- # count=2 00:05:29.173 20:22:47 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:29.173 20:22:47 -- bdev/nbd_common.sh@95 -- # count=2 00:05:29.173 20:22:47 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:29.173 20:22:47 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:29.173 20:22:47 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.173 20:22:47 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:29.173 20:22:47 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:29.174 20:22:47 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:29.174 20:22:47 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:29.174 20:22:47 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:29.174 256+0 records in 00:05:29.174 256+0 records out 00:05:29.174 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00552978 s, 190 MB/s 00:05:29.174 20:22:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:29.174 20:22:47 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:29.174 256+0 records in 00:05:29.174 256+0 records out 00:05:29.174 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138732 s, 75.6 MB/s 00:05:29.174 20:22:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:29.174 20:22:47 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:29.174 256+0 records in 00:05:29.174 256+0 records out 00:05:29.174 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0154781 s, 67.7 MB/s 00:05:29.174 20:22:47 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:29.174 20:22:47 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.174 20:22:47 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:29.174 20:22:47 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:29.174 20:22:47 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:29.174 20:22:47 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:29.174 20:22:47 -- 
bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:29.174 20:22:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:29.174 20:22:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:29.174 20:22:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:29.174 20:22:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:29.174 20:22:47 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:29.174 20:22:47 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:29.174 20:22:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.174 20:22:47 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.174 20:22:47 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:29.174 20:22:47 -- bdev/nbd_common.sh@51 -- # local i 00:05:29.174 20:22:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.174 20:22:47 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:29.434 20:22:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:29.434 20:22:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:29.434 20:22:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:29.434 20:22:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.434 20:22:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.434 20:22:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:29.434 20:22:47 -- bdev/nbd_common.sh@41 -- # break 00:05:29.434 20:22:47 -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.434 20:22:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.434 20:22:47 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:29.434 20:22:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:29.434 20:22:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:29.434 20:22:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:29.434 20:22:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.434 20:22:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.434 20:22:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:29.434 20:22:47 -- bdev/nbd_common.sh@41 -- # break 00:05:29.434 20:22:47 -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.434 20:22:47 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.434 20:22:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.434 20:22:47 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.694 20:22:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:29.694 20:22:47 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:29.694 20:22:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.694 20:22:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:29.694 20:22:47 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:29.694 20:22:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.694 20:22:47 -- bdev/nbd_common.sh@65 -- # true 00:05:29.694 20:22:47 -- bdev/nbd_common.sh@65 -- # count=0 00:05:29.694 20:22:47 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:29.694 
20:22:47 -- bdev/nbd_common.sh@104 -- # count=0 00:05:29.694 20:22:47 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:29.694 20:22:47 -- bdev/nbd_common.sh@109 -- # return 0 00:05:29.694 20:22:47 -- event/event.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:29.955 20:22:48 -- event/event.sh@35 -- # sleep 3 00:05:30.528 [2024-04-26 20:22:48.604813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.528 [2024-04-26 20:22:48.691243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.528 [2024-04-26 20:22:48.691247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.528 [2024-04-26 20:22:48.764802] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:30.528 [2024-04-26 20:22:48.764838] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:33.074 20:22:51 -- event/event.sh@23 -- # for i in {0..2} 00:05:33.074 20:22:51 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:33.074 spdk_app_start Round 2 00:05:33.074 20:22:51 -- event/event.sh@25 -- # waitforlisten 3324796 /var/tmp/spdk-nbd.sock 00:05:33.074 20:22:51 -- common/autotest_common.sh@819 -- # '[' -z 3324796 ']' 00:05:33.074 20:22:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:33.074 20:22:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:33.074 20:22:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:33.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
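Each round's data check, visible in the dd/cmp traces above, pushes 1 MiB of random data through each NBD device and compares it back against the source file. The standalone equivalent of one verify pass, with the nbdrandtest file name taken from the trace:

  # Generate 1 MiB of random data, write it through the exported bdev,
  # then compare the device contents byte-for-byte against the source
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
  dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M nbdrandtest /dev/nbd0
  rm nbdrandtest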
00:05:33.074 20:22:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:33.074 20:22:51 -- common/autotest_common.sh@10 -- # set +x 00:05:33.074 20:22:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:33.074 20:22:51 -- common/autotest_common.sh@852 -- # return 0 00:05:33.074 20:22:51 -- event/event.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.335 Malloc0 00:05:33.335 20:22:51 -- event/event.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.335 Malloc1 00:05:33.335 20:22:51 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.335 20:22:51 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.335 20:22:51 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.335 20:22:51 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:33.335 20:22:51 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.335 20:22:51 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:33.335 20:22:51 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.335 20:22:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.335 20:22:51 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.335 20:22:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:33.335 20:22:51 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.335 20:22:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:33.335 20:22:51 -- bdev/nbd_common.sh@12 -- # local i 00:05:33.335 20:22:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:33.335 20:22:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.335 20:22:51 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:33.595 /dev/nbd0 00:05:33.595 20:22:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:33.595 20:22:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:33.595 20:22:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:33.595 20:22:51 -- common/autotest_common.sh@857 -- # local i 00:05:33.595 20:22:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:33.595 20:22:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:33.595 20:22:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:33.595 20:22:51 -- common/autotest_common.sh@861 -- # break 00:05:33.595 20:22:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:33.595 20:22:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:33.595 20:22:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.595 1+0 records in 00:05:33.595 1+0 records out 00:05:33.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025327 s, 16.2 MB/s 00:05:33.595 20:22:51 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:33.595 20:22:51 -- common/autotest_common.sh@874 -- # size=4096 00:05:33.595 20:22:51 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:33.595 20:22:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:33.595 
20:22:51 -- common/autotest_common.sh@877 -- # return 0 00:05:33.595 20:22:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.595 20:22:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.595 20:22:51 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:33.855 /dev/nbd1 00:05:33.855 20:22:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:33.855 20:22:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:33.855 20:22:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:33.855 20:22:51 -- common/autotest_common.sh@857 -- # local i 00:05:33.855 20:22:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:33.855 20:22:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:33.855 20:22:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:33.855 20:22:51 -- common/autotest_common.sh@861 -- # break 00:05:33.855 20:22:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:33.855 20:22:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:33.855 20:22:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.855 1+0 records in 00:05:33.855 1+0 records out 00:05:33.855 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185519 s, 22.1 MB/s 00:05:33.855 20:22:51 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:33.855 20:22:51 -- common/autotest_common.sh@874 -- # size=4096 00:05:33.855 20:22:51 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:33.855 20:22:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:33.855 20:22:51 -- common/autotest_common.sh@877 -- # return 0 00:05:33.855 20:22:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.855 20:22:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.855 20:22:51 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.855 20:22:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.855 20:22:51 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.855 20:22:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:33.855 { 00:05:33.855 "nbd_device": "/dev/nbd0", 00:05:33.855 "bdev_name": "Malloc0" 00:05:33.855 }, 00:05:33.855 { 00:05:33.855 "nbd_device": "/dev/nbd1", 00:05:33.855 "bdev_name": "Malloc1" 00:05:33.855 } 00:05:33.855 ]' 00:05:33.855 20:22:52 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:33.855 { 00:05:33.855 "nbd_device": "/dev/nbd0", 00:05:33.855 "bdev_name": "Malloc0" 00:05:33.855 }, 00:05:33.855 { 00:05:33.855 "nbd_device": "/dev/nbd1", 00:05:33.855 "bdev_name": "Malloc1" 00:05:33.855 } 00:05:33.855 ]' 00:05:33.855 20:22:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.855 20:22:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:33.855 /dev/nbd1' 00:05:33.855 20:22:52 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:33.855 /dev/nbd1' 00:05:33.855 20:22:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.855 20:22:52 -- bdev/nbd_common.sh@65 -- # count=2 00:05:33.855 20:22:52 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:33.855 20:22:52 -- bdev/nbd_common.sh@95 -- # count=2 00:05:33.855 20:22:52 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:33.855 
20:22:52 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:33.855 20:22:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.855 20:22:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.855 20:22:52 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:33.855 20:22:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.855 20:22:52 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:33.855 20:22:52 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:34.116 256+0 records in 00:05:34.116 256+0 records out 00:05:34.116 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00512342 s, 205 MB/s 00:05:34.116 20:22:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.116 20:22:52 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:34.116 256+0 records in 00:05:34.116 256+0 records out 00:05:34.116 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152931 s, 68.6 MB/s 00:05:34.116 20:22:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.116 20:22:52 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:34.116 256+0 records in 00:05:34.116 256+0 records out 00:05:34.116 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0174187 s, 60.2 MB/s 00:05:34.116 20:22:52 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:34.116 20:22:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.116 20:22:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.116 20:22:52 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:34.116 20:22:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:34.116 20:22:52 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:34.116 20:22:52 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:34.116 20:22:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.116 20:22:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:34.116 20:22:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.116 20:22:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:34.116 20:22:52 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:34.117 20:22:52 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:34.117 20:22:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.117 20:22:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.117 20:22:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:34.117 20:22:52 -- bdev/nbd_common.sh@51 -- # local i 00:05:34.117 20:22:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.117 20:22:52 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:34.117 20:22:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:34.117 20:22:52 -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd0 00:05:34.117 20:22:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:34.117 20:22:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.117 20:22:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.117 20:22:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:34.117 20:22:52 -- bdev/nbd_common.sh@41 -- # break 00:05:34.117 20:22:52 -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.117 20:22:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.117 20:22:52 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:34.377 20:22:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:34.377 20:22:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:34.377 20:22:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:34.377 20:22:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.377 20:22:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.377 20:22:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:34.377 20:22:52 -- bdev/nbd_common.sh@41 -- # break 00:05:34.377 20:22:52 -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.377 20:22:52 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.377 20:22:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.377 20:22:52 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.638 20:22:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:34.638 20:22:52 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:34.638 20:22:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.638 20:22:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:34.638 20:22:52 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:34.638 20:22:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.638 20:22:52 -- bdev/nbd_common.sh@65 -- # true 00:05:34.638 20:22:52 -- bdev/nbd_common.sh@65 -- # count=0 00:05:34.638 20:22:52 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:34.638 20:22:52 -- bdev/nbd_common.sh@104 -- # count=0 00:05:34.638 20:22:52 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:34.638 20:22:52 -- bdev/nbd_common.sh@109 -- # return 0 00:05:34.638 20:22:52 -- event/event.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:34.638 20:22:52 -- event/event.sh@35 -- # sleep 3 00:05:35.208 [2024-04-26 20:22:53.447200] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.208 [2024-04-26 20:22:53.535117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.208 [2024-04-26 20:22:53.535122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.468 [2024-04-26 20:22:53.609476] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:35.468 [2024-04-26 20:22:53.609521] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
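[editor's note] nbd_dd_data_verify, traced above in its write and verify passes, pushes the same 1 MiB of random data through both NBD devices and then byte-compares each device against the source file; the detach and count=0 check above complete the cycle before the app is killed and restarted for the next round. Condensed into a sketch (block size, count, and cmp flags as in the trace; the scratch path is a placeholder):

    src=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # write pass: one random source file, copied to every device
    dd if=/dev/urandom of="$src" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$src" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify pass: compare the first 1M of each device byte-for-byte
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$src" "$dev"    # any mismatch fails the test
    done
    rm "$src"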
00:05:38.016 20:22:55 -- event/event.sh@38 -- # waitforlisten 3324796 /var/tmp/spdk-nbd.sock 00:05:38.016 20:22:55 -- common/autotest_common.sh@819 -- # '[' -z 3324796 ']' 00:05:38.016 20:22:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.016 20:22:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:38.016 20:22:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:38.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:38.016 20:22:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:38.016 20:22:55 -- common/autotest_common.sh@10 -- # set +x 00:05:38.016 20:22:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:38.016 20:22:56 -- common/autotest_common.sh@852 -- # return 0 00:05:38.016 20:22:56 -- event/event.sh@39 -- # killprocess 3324796 00:05:38.016 20:22:56 -- common/autotest_common.sh@926 -- # '[' -z 3324796 ']' 00:05:38.016 20:22:56 -- common/autotest_common.sh@930 -- # kill -0 3324796 00:05:38.016 20:22:56 -- common/autotest_common.sh@931 -- # uname 00:05:38.016 20:22:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:38.016 20:22:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3324796 00:05:38.016 20:22:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:38.016 20:22:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:38.016 20:22:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3324796' 00:05:38.016 killing process with pid 3324796 00:05:38.016 20:22:56 -- common/autotest_common.sh@945 -- # kill 3324796 00:05:38.016 20:22:56 -- common/autotest_common.sh@950 -- # wait 3324796 00:05:38.276 spdk_app_start is called in Round 0. 00:05:38.276 Shutdown signal received, stop current app iteration 00:05:38.276 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:05:38.276 spdk_app_start is called in Round 1. 00:05:38.276 Shutdown signal received, stop current app iteration 00:05:38.276 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:05:38.276 spdk_app_start is called in Round 2. 00:05:38.276 Shutdown signal received, stop current app iteration 00:05:38.276 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:05:38.276 spdk_app_start is called in Round 3. 
00:05:38.276 Shutdown signal received, stop current app iteration 00:05:38.276 20:22:56 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:38.276 20:22:56 -- event/event.sh@42 -- # return 0 00:05:38.276 00:05:38.276 real 0m15.725s 00:05:38.276 user 0m32.966s 00:05:38.276 sys 0m2.073s 00:05:38.276 20:22:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.276 20:22:56 -- common/autotest_common.sh@10 -- # set +x 00:05:38.276 ************************************ 00:05:38.276 END TEST app_repeat 00:05:38.276 ************************************ 00:05:38.537 20:22:56 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:38.537 20:22:56 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:38.537 20:22:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:38.537 20:22:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.537 20:22:56 -- common/autotest_common.sh@10 -- # set +x 00:05:38.537 ************************************ 00:05:38.537 START TEST cpu_locks 00:05:38.537 ************************************ 00:05:38.537 20:22:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:38.537 * Looking for test storage... 00:05:38.537 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event 00:05:38.537 20:22:56 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:38.537 20:22:56 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:38.537 20:22:56 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:38.537 20:22:56 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:38.537 20:22:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:38.537 20:22:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.537 20:22:56 -- common/autotest_common.sh@10 -- # set +x 00:05:38.537 ************************************ 00:05:38.537 START TEST default_locks 00:05:38.537 ************************************ 00:05:38.537 20:22:56 -- common/autotest_common.sh@1104 -- # default_locks 00:05:38.537 20:22:56 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3328198 00:05:38.537 20:22:56 -- event/cpu_locks.sh@47 -- # waitforlisten 3328198 00:05:38.537 20:22:56 -- common/autotest_common.sh@819 -- # '[' -z 3328198 ']' 00:05:38.537 20:22:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.537 20:22:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:38.537 20:22:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.537 20:22:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:38.537 20:22:56 -- common/autotest_common.sh@10 -- # set +x 00:05:38.537 20:22:56 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.537 [2024-04-26 20:22:56.821352] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:38.537 [2024-04-26 20:22:56.821501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3328198 ] 00:05:38.798 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.798 [2024-04-26 20:22:56.939011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.798 [2024-04-26 20:22:57.035328] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:38.798 [2024-04-26 20:22:57.035544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.369 20:22:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:39.369 20:22:57 -- common/autotest_common.sh@852 -- # return 0 00:05:39.369 20:22:57 -- event/cpu_locks.sh@49 -- # locks_exist 3328198 00:05:39.369 20:22:57 -- event/cpu_locks.sh@22 -- # lslocks -p 3328198 00:05:39.369 20:22:57 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.369 lslocks: write error 00:05:39.369 20:22:57 -- event/cpu_locks.sh@50 -- # killprocess 3328198 00:05:39.369 20:22:57 -- common/autotest_common.sh@926 -- # '[' -z 3328198 ']' 00:05:39.369 20:22:57 -- common/autotest_common.sh@930 -- # kill -0 3328198 00:05:39.369 20:22:57 -- common/autotest_common.sh@931 -- # uname 00:05:39.369 20:22:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:39.369 20:22:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3328198 00:05:39.628 20:22:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:39.628 20:22:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:39.628 20:22:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3328198' 00:05:39.628 killing process with pid 3328198 00:05:39.628 20:22:57 -- common/autotest_common.sh@945 -- # kill 3328198 00:05:39.628 20:22:57 -- common/autotest_common.sh@950 -- # wait 3328198 00:05:40.567 20:22:58 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3328198 00:05:40.567 20:22:58 -- common/autotest_common.sh@640 -- # local es=0 00:05:40.567 20:22:58 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3328198 00:05:40.567 20:22:58 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:40.567 20:22:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:40.567 20:22:58 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:40.567 20:22:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:40.567 20:22:58 -- common/autotest_common.sh@643 -- # waitforlisten 3328198 00:05:40.567 20:22:58 -- common/autotest_common.sh@819 -- # '[' -z 3328198 ']' 00:05:40.567 20:22:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.567 20:22:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:40.567 20:22:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
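[editor's note] locks_exist above verifies that the new spdk_tgt really holds its per-core lock file: lslocks lists the locks owned by the pid and grep looks for the spdk_cpu_lock prefix. The stray "lslocks: write error" is lslocks complaining about the pipe closing once grep -q has its answer, not a test failure. As a sketch:

    # Assert that $1 holds at least one /var/tmp/spdk_cpu_lock_* lock.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    locks_exist "$spdk_tgt_pid" || exit 1    # $spdk_tgt_pid: pid from the launch step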
00:05:40.567 20:22:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:40.567 20:22:58 -- common/autotest_common.sh@10 -- # set +x 00:05:40.567 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3328198) - No such process 00:05:40.567 ERROR: process (pid: 3328198) is no longer running 00:05:40.567 20:22:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:40.567 20:22:58 -- common/autotest_common.sh@852 -- # return 1 00:05:40.567 20:22:58 -- common/autotest_common.sh@643 -- # es=1 00:05:40.567 20:22:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:40.567 20:22:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:40.567 20:22:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:40.567 20:22:58 -- event/cpu_locks.sh@54 -- # no_locks 00:05:40.567 20:22:58 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:40.567 20:22:58 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:40.567 20:22:58 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:40.567 00:05:40.567 real 0m1.848s 00:05:40.567 user 0m1.768s 00:05:40.567 sys 0m0.508s 00:05:40.567 20:22:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.567 20:22:58 -- common/autotest_common.sh@10 -- # set +x 00:05:40.567 ************************************ 00:05:40.567 END TEST default_locks 00:05:40.567 ************************************ 00:05:40.567 20:22:58 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:40.567 20:22:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:40.567 20:22:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.567 20:22:58 -- common/autotest_common.sh@10 -- # set +x 00:05:40.567 ************************************ 00:05:40.567 START TEST default_locks_via_rpc 00:05:40.567 ************************************ 00:05:40.567 20:22:58 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:05:40.567 20:22:58 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3328662 00:05:40.567 20:22:58 -- event/cpu_locks.sh@63 -- # waitforlisten 3328662 00:05:40.567 20:22:58 -- common/autotest_common.sh@819 -- # '[' -z 3328662 ']' 00:05:40.567 20:22:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.567 20:22:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:40.567 20:22:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.567 20:22:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:40.567 20:22:58 -- common/autotest_common.sh@10 -- # set +x 00:05:40.567 20:22:58 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.567 [2024-04-26 20:22:58.699319] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
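[editor's note] default_locks also exercised the failure path above: the target was killed, and NOT waitforlisten had to fail itself, since nothing listens on the socket any more. The NOT wrapper inverts the wrapped command's exit status so the test passes only when the command errors out. A hedged sketch of its core (the real helper in autotest_common.sh also validates the argument and special-cases signal exits, es > 128, as the trace shows):

    # Pass only when the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))    # success here means the command did fail
    }

    NOT waitforlisten "$dead_pid" /var/tmp/spdk.sock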
00:05:40.567 [2024-04-26 20:22:58.699457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3328662 ] 00:05:40.567 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.567 [2024-04-26 20:22:58.815006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.828 [2024-04-26 20:22:58.911243] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:40.828 [2024-04-26 20:22:58.911462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.088 20:22:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:41.088 20:22:59 -- common/autotest_common.sh@852 -- # return 0 00:05:41.088 20:22:59 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:41.088 20:22:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:41.088 20:22:59 -- common/autotest_common.sh@10 -- # set +x 00:05:41.088 20:22:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.088 20:22:59 -- event/cpu_locks.sh@67 -- # no_locks 00:05:41.088 20:22:59 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:41.088 20:22:59 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:41.088 20:22:59 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:41.088 20:22:59 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:41.088 20:22:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:41.088 20:22:59 -- common/autotest_common.sh@10 -- # set +x 00:05:41.088 20:22:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.088 20:22:59 -- event/cpu_locks.sh@71 -- # locks_exist 3328662 00:05:41.088 20:22:59 -- event/cpu_locks.sh@22 -- # lslocks -p 3328662 00:05:41.088 20:22:59 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:41.348 20:22:59 -- event/cpu_locks.sh@73 -- # killprocess 3328662 00:05:41.348 20:22:59 -- common/autotest_common.sh@926 -- # '[' -z 3328662 ']' 00:05:41.348 20:22:59 -- common/autotest_common.sh@930 -- # kill -0 3328662 00:05:41.348 20:22:59 -- common/autotest_common.sh@931 -- # uname 00:05:41.348 20:22:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:41.348 20:22:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3328662 00:05:41.348 20:22:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:41.348 20:22:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:41.348 20:22:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3328662' 00:05:41.348 killing process with pid 3328662 00:05:41.348 20:22:59 -- common/autotest_common.sh@945 -- # kill 3328662 00:05:41.348 20:22:59 -- common/autotest_common.sh@950 -- # wait 3328662 00:05:42.288 00:05:42.288 real 0m1.802s 00:05:42.288 user 0m1.715s 00:05:42.288 sys 0m0.498s 00:05:42.288 20:23:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.288 20:23:00 -- common/autotest_common.sh@10 -- # set +x 00:05:42.288 ************************************ 00:05:42.288 END TEST default_locks_via_rpc 00:05:42.288 ************************************ 00:05:42.288 20:23:00 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:42.288 20:23:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:42.288 20:23:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.288 20:23:00 -- 
common/autotest_common.sh@10 -- # set +x 00:05:42.288 ************************************ 00:05:42.288 START TEST non_locking_app_on_locked_coremask 00:05:42.288 ************************************ 00:05:42.288 20:23:00 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:05:42.288 20:23:00 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:42.288 20:23:00 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3329002 00:05:42.288 20:23:00 -- event/cpu_locks.sh@81 -- # waitforlisten 3329002 /var/tmp/spdk.sock 00:05:42.288 20:23:00 -- common/autotest_common.sh@819 -- # '[' -z 3329002 ']' 00:05:42.288 20:23:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.288 20:23:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:42.288 20:23:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.288 20:23:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:42.288 20:23:00 -- common/autotest_common.sh@10 -- # set +x 00:05:42.288 [2024-04-26 20:23:00.515769] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:42.288 [2024-04-26 20:23:00.515866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3329002 ] 00:05:42.288 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.288 [2024-04-26 20:23:00.605338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.549 [2024-04-26 20:23:00.700116] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:42.549 [2024-04-26 20:23:00.700318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.118 20:23:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:43.118 20:23:01 -- common/autotest_common.sh@852 -- # return 0 00:05:43.118 20:23:01 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3329048 00:05:43.118 20:23:01 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:43.118 20:23:01 -- event/cpu_locks.sh@85 -- # waitforlisten 3329048 /var/tmp/spdk2.sock 00:05:43.118 20:23:01 -- common/autotest_common.sh@819 -- # '[' -z 3329048 ']' 00:05:43.118 20:23:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:43.118 20:23:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:43.118 20:23:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:43.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:43.118 20:23:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:43.118 20:23:01 -- common/autotest_common.sh@10 -- # set +x 00:05:43.118 [2024-04-26 20:23:01.297031] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
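[editor's note] default_locks_via_rpc, which finished above, drove the same check over JSON-RPC instead of process lifetime: framework_disable_cpumask_locks drops the core lock files at runtime, framework_enable_cpumask_locks re-claims them, and no_locks / locks_exist assert the filesystem state after each call. The equivalent calls with the stock rpc.py client (socket path as in the trace; rpc_cmd in the trace is assumed to wrap rpc.py):

    rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock

    "$rpc" -s "$sock" framework_disable_cpumask_locks   # release the core locks
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null              # no matches expected now
    "$rpc" -s "$sock" framework_enable_cpumask_locks    # re-claim the cores
    ls /var/tmp/spdk_cpu_lock_*                          # lock files are back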
00:05:43.118 [2024-04-26 20:23:01.297159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3329048 ] 00:05:43.118 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.118 [2024-04-26 20:23:01.452155] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:43.118 [2024-04-26 20:23:01.452202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.376 [2024-04-26 20:23:01.646088] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:43.376 [2024-04-26 20:23:01.646291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.318 20:23:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:44.318 20:23:02 -- common/autotest_common.sh@852 -- # return 0 00:05:44.318 20:23:02 -- event/cpu_locks.sh@87 -- # locks_exist 3329002 00:05:44.318 20:23:02 -- event/cpu_locks.sh@22 -- # lslocks -p 3329002 00:05:44.318 20:23:02 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:44.887 lslocks: write error 00:05:44.887 20:23:02 -- event/cpu_locks.sh@89 -- # killprocess 3329002 00:05:44.887 20:23:02 -- common/autotest_common.sh@926 -- # '[' -z 3329002 ']' 00:05:44.887 20:23:02 -- common/autotest_common.sh@930 -- # kill -0 3329002 00:05:44.887 20:23:02 -- common/autotest_common.sh@931 -- # uname 00:05:44.887 20:23:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:44.887 20:23:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3329002 00:05:44.887 20:23:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:44.887 20:23:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:44.887 20:23:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3329002' 00:05:44.887 killing process with pid 3329002 00:05:44.887 20:23:02 -- common/autotest_common.sh@945 -- # kill 3329002 00:05:44.887 20:23:02 -- common/autotest_common.sh@950 -- # wait 3329002 00:05:46.811 20:23:04 -- event/cpu_locks.sh@90 -- # killprocess 3329048 00:05:46.811 20:23:04 -- common/autotest_common.sh@926 -- # '[' -z 3329048 ']' 00:05:46.811 20:23:04 -- common/autotest_common.sh@930 -- # kill -0 3329048 00:05:46.811 20:23:04 -- common/autotest_common.sh@931 -- # uname 00:05:46.811 20:23:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:46.811 20:23:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3329048 00:05:46.811 20:23:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:46.811 20:23:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:46.811 20:23:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3329048' 00:05:46.811 killing process with pid 3329048 00:05:46.811 20:23:04 -- common/autotest_common.sh@945 -- # kill 3329048 00:05:46.811 20:23:04 -- common/autotest_common.sh@950 -- # wait 3329048 00:05:47.448 00:05:47.448 real 0m5.110s 00:05:47.448 user 0m5.269s 00:05:47.448 sys 0m0.995s 00:05:47.448 20:23:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.448 20:23:05 -- common/autotest_common.sh@10 -- # set +x 00:05:47.448 ************************************ 00:05:47.448 END TEST non_locking_app_on_locked_coremask 00:05:47.448 ************************************ 00:05:47.448 20:23:05 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:05:47.448 20:23:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:47.448 20:23:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:47.448 20:23:05 -- common/autotest_common.sh@10 -- # set +x 00:05:47.448 ************************************ 00:05:47.448 START TEST locking_app_on_unlocked_coremask 00:05:47.448 ************************************ 00:05:47.448 20:23:05 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:05:47.448 20:23:05 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3329961 00:05:47.448 20:23:05 -- event/cpu_locks.sh@99 -- # waitforlisten 3329961 /var/tmp/spdk.sock 00:05:47.448 20:23:05 -- common/autotest_common.sh@819 -- # '[' -z 3329961 ']' 00:05:47.448 20:23:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.448 20:23:05 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:47.448 20:23:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:47.448 20:23:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.448 20:23:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:47.448 20:23:05 -- common/autotest_common.sh@10 -- # set +x 00:05:47.448 [2024-04-26 20:23:05.663865] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:47.448 [2024-04-26 20:23:05.663961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3329961 ] 00:05:47.448 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.729 [2024-04-26 20:23:05.754625] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:47.729 [2024-04-26 20:23:05.754663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.729 [2024-04-26 20:23:05.846007] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:47.729 [2024-04-26 20:23:05.846196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.300 20:23:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:48.300 20:23:06 -- common/autotest_common.sh@852 -- # return 0 00:05:48.300 20:23:06 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3330239 00:05:48.300 20:23:06 -- event/cpu_locks.sh@103 -- # waitforlisten 3330239 /var/tmp/spdk2.sock 00:05:48.300 20:23:06 -- common/autotest_common.sh@819 -- # '[' -z 3330239 ']' 00:05:48.300 20:23:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:48.300 20:23:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:48.300 20:23:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:48.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
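[editor's note] locking_app_on_unlocked_coremask, starting above, launches the first target with --disable-cpumask-locks so that a second target can come up on the very same core mask, as long as it uses its own RPC socket. Reconstructed from the launch lines in the trace (backgrounding and pid capture are simplifications):

    spdk_tgt=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt

    # first instance: core 0, but no core lock is taken
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks &
    pid1=$!

    # second instance: same mask succeeds because nothing holds the lock;
    # -r gives it a private RPC socket so the two targets do not collide
    "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &
    pid2=$!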
00:05:48.300 20:23:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:48.300 20:23:06 -- common/autotest_common.sh@10 -- # set +x 00:05:48.300 20:23:06 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:48.300 [2024-04-26 20:23:06.452722] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:48.300 [2024-04-26 20:23:06.452842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3330239 ] 00:05:48.300 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.300 [2024-04-26 20:23:06.585690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.561 [2024-04-26 20:23:06.772084] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:48.561 [2024-04-26 20:23:06.772276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.503 20:23:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:49.503 20:23:07 -- common/autotest_common.sh@852 -- # return 0 00:05:49.503 20:23:07 -- event/cpu_locks.sh@105 -- # locks_exist 3330239 00:05:49.503 20:23:07 -- event/cpu_locks.sh@22 -- # lslocks -p 3330239 00:05:49.503 20:23:07 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.763 lslocks: write error 00:05:49.763 20:23:07 -- event/cpu_locks.sh@107 -- # killprocess 3329961 00:05:49.763 20:23:07 -- common/autotest_common.sh@926 -- # '[' -z 3329961 ']' 00:05:49.763 20:23:07 -- common/autotest_common.sh@930 -- # kill -0 3329961 00:05:49.763 20:23:07 -- common/autotest_common.sh@931 -- # uname 00:05:49.763 20:23:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:49.763 20:23:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3329961 00:05:49.763 20:23:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:49.763 20:23:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:49.763 20:23:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3329961' 00:05:49.763 killing process with pid 3329961 00:05:49.763 20:23:08 -- common/autotest_common.sh@945 -- # kill 3329961 00:05:49.763 20:23:08 -- common/autotest_common.sh@950 -- # wait 3329961 00:05:51.676 20:23:09 -- event/cpu_locks.sh@108 -- # killprocess 3330239 00:05:51.676 20:23:09 -- common/autotest_common.sh@926 -- # '[' -z 3330239 ']' 00:05:51.676 20:23:09 -- common/autotest_common.sh@930 -- # kill -0 3330239 00:05:51.676 20:23:09 -- common/autotest_common.sh@931 -- # uname 00:05:51.676 20:23:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:51.676 20:23:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3330239 00:05:51.676 20:23:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:51.676 20:23:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:51.676 20:23:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3330239' 00:05:51.676 killing process with pid 3330239 00:05:51.676 20:23:09 -- common/autotest_common.sh@945 -- # kill 3330239 00:05:51.676 20:23:09 -- common/autotest_common.sh@950 -- # wait 3330239 00:05:52.246 00:05:52.246 real 0m4.926s 00:05:52.246 user 0m5.086s 00:05:52.246 sys 0m0.853s 00:05:52.246 20:23:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.246 20:23:10 -- 
common/autotest_common.sh@10 -- # set +x 00:05:52.246 ************************************ 00:05:52.246 END TEST locking_app_on_unlocked_coremask 00:05:52.246 ************************************ 00:05:52.246 20:23:10 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:52.246 20:23:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:52.246 20:23:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.246 20:23:10 -- common/autotest_common.sh@10 -- # set +x 00:05:52.246 ************************************ 00:05:52.246 START TEST locking_app_on_locked_coremask 00:05:52.246 ************************************ 00:05:52.246 20:23:10 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:05:52.246 20:23:10 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3330924 00:05:52.246 20:23:10 -- event/cpu_locks.sh@116 -- # waitforlisten 3330924 /var/tmp/spdk.sock 00:05:52.246 20:23:10 -- common/autotest_common.sh@819 -- # '[' -z 3330924 ']' 00:05:52.246 20:23:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.246 20:23:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:52.246 20:23:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.246 20:23:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:52.246 20:23:10 -- common/autotest_common.sh@10 -- # set +x 00:05:52.246 20:23:10 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.507 [2024-04-26 20:23:10.657260] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:52.507 [2024-04-26 20:23:10.657401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3330924 ] 00:05:52.507 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.507 [2024-04-26 20:23:10.774792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.768 [2024-04-26 20:23:10.870708] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:52.768 [2024-04-26 20:23:10.870904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.028 20:23:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:53.028 20:23:11 -- common/autotest_common.sh@852 -- # return 0 00:05:53.028 20:23:11 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3331216 00:05:53.028 20:23:11 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3331216 /var/tmp/spdk2.sock 00:05:53.028 20:23:11 -- common/autotest_common.sh@640 -- # local es=0 00:05:53.028 20:23:11 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3331216 /var/tmp/spdk2.sock 00:05:53.028 20:23:11 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:53.028 20:23:11 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:53.028 20:23:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:53.028 20:23:11 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:53.028 20:23:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:53.028 20:23:11 -- common/autotest_common.sh@643 -- # waitforlisten 3331216 /var/tmp/spdk2.sock 00:05:53.028 20:23:11 -- common/autotest_common.sh@819 -- # '[' -z 3331216 ']' 00:05:53.028 20:23:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.028 20:23:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:53.028 20:23:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.028 20:23:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:53.028 20:23:11 -- common/autotest_common.sh@10 -- # set +x 00:05:53.288 [2024-04-26 20:23:11.414701] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:53.288 [2024-04-26 20:23:11.414781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3331216 ] 00:05:53.288 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.288 [2024-04-26 20:23:11.535283] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3330924 has claimed it. 00:05:53.288 [2024-04-26 20:23:11.535330] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
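[editor's note] locking_app_on_locked_coremask is the mirror image: the first target owns the lock on core 0, so the second target asked for the same mask logs "Cannot create lock on core 0, probably process ... has claimed it" and exits before it ever opens its socket, which is why the surrounding NOT waitforlisten is the pass condition. In miniature (sketch only, startup waits omitted):

    "$spdk_tgt" -m 0x1 &                            # claims /var/tmp/spdk_cpu_lock_000
    pid1=$!

    "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &     # same mask: dies at startup
    pid2=$!
    # pass condition: pid2 never comes up on its socket
    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock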
00:05:53.860 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3331216) - No such process 00:05:53.860 ERROR: process (pid: 3331216) is no longer running 00:05:53.860 20:23:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:53.860 20:23:11 -- common/autotest_common.sh@852 -- # return 1 00:05:53.860 20:23:11 -- common/autotest_common.sh@643 -- # es=1 00:05:53.860 20:23:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:53.860 20:23:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:53.860 20:23:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:53.860 20:23:11 -- event/cpu_locks.sh@122 -- # locks_exist 3330924 00:05:53.860 20:23:11 -- event/cpu_locks.sh@22 -- # lslocks -p 3330924 00:05:53.860 20:23:11 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.860 lslocks: write error 00:05:53.860 20:23:12 -- event/cpu_locks.sh@124 -- # killprocess 3330924 00:05:53.860 20:23:12 -- common/autotest_common.sh@926 -- # '[' -z 3330924 ']' 00:05:53.861 20:23:12 -- common/autotest_common.sh@930 -- # kill -0 3330924 00:05:53.861 20:23:12 -- common/autotest_common.sh@931 -- # uname 00:05:53.861 20:23:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:54.121 20:23:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3330924 00:05:54.121 20:23:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:54.121 20:23:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:54.121 20:23:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3330924' 00:05:54.121 killing process with pid 3330924 00:05:54.121 20:23:12 -- common/autotest_common.sh@945 -- # kill 3330924 00:05:54.121 20:23:12 -- common/autotest_common.sh@950 -- # wait 3330924 00:05:55.075 00:05:55.075 real 0m2.546s 00:05:55.075 user 0m2.583s 00:05:55.075 sys 0m0.664s 00:05:55.075 20:23:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.075 20:23:13 -- common/autotest_common.sh@10 -- # set +x 00:05:55.075 ************************************ 00:05:55.075 END TEST locking_app_on_locked_coremask 00:05:55.075 ************************************ 00:05:55.075 20:23:13 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:55.076 20:23:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:55.076 20:23:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.076 20:23:13 -- common/autotest_common.sh@10 -- # set +x 00:05:55.076 ************************************ 00:05:55.076 START TEST locking_overlapped_coremask 00:05:55.076 ************************************ 00:05:55.076 20:23:13 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:05:55.076 20:23:13 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3331549 00:05:55.076 20:23:13 -- event/cpu_locks.sh@133 -- # waitforlisten 3331549 /var/tmp/spdk.sock 00:05:55.076 20:23:13 -- common/autotest_common.sh@819 -- # '[' -z 3331549 ']' 00:05:55.076 20:23:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.076 20:23:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:55.076 20:23:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:55.076 20:23:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:55.076 20:23:13 -- common/autotest_common.sh@10 -- # set +x 00:05:55.076 20:23:13 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:55.076 [2024-04-26 20:23:13.259552] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:55.076 [2024-04-26 20:23:13.259698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3331549 ] 00:05:55.076 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.076 [2024-04-26 20:23:13.390409] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:55.342 [2024-04-26 20:23:13.487072] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:55.342 [2024-04-26 20:23:13.487366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.342 [2024-04-26 20:23:13.487465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.342 [2024-04-26 20:23:13.487469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.913 20:23:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:55.913 20:23:13 -- common/autotest_common.sh@852 -- # return 0 00:05:55.913 20:23:13 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3331772 00:05:55.913 20:23:13 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3331772 /var/tmp/spdk2.sock 00:05:55.913 20:23:13 -- common/autotest_common.sh@640 -- # local es=0 00:05:55.913 20:23:13 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3331772 /var/tmp/spdk2.sock 00:05:55.913 20:23:13 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:55.913 20:23:13 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:55.913 20:23:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:55.913 20:23:13 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:55.913 20:23:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:55.913 20:23:13 -- common/autotest_common.sh@643 -- # waitforlisten 3331772 /var/tmp/spdk2.sock 00:05:55.913 20:23:13 -- common/autotest_common.sh@819 -- # '[' -z 3331772 ']' 00:05:55.913 20:23:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.913 20:23:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:55.913 20:23:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.913 20:23:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:55.913 20:23:13 -- common/autotest_common.sh@10 -- # set +x 00:05:55.914 [2024-04-26 20:23:14.057671] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:55.914 [2024-04-26 20:23:14.057812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3331772 ] 00:05:55.914 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.914 [2024-04-26 20:23:14.227935] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3331549 has claimed it. 00:05:55.914 [2024-04-26 20:23:14.227994] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:56.485 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3331772) - No such process 00:05:56.485 ERROR: process (pid: 3331772) is no longer running 00:05:56.485 20:23:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:56.485 20:23:14 -- common/autotest_common.sh@852 -- # return 1 00:05:56.485 20:23:14 -- common/autotest_common.sh@643 -- # es=1 00:05:56.485 20:23:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:56.485 20:23:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:56.485 20:23:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:56.485 20:23:14 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:56.485 20:23:14 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:56.485 20:23:14 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:56.485 20:23:14 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:56.485 20:23:14 -- event/cpu_locks.sh@141 -- # killprocess 3331549 00:05:56.486 20:23:14 -- common/autotest_common.sh@926 -- # '[' -z 3331549 ']' 00:05:56.486 20:23:14 -- common/autotest_common.sh@930 -- # kill -0 3331549 00:05:56.486 20:23:14 -- common/autotest_common.sh@931 -- # uname 00:05:56.486 20:23:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:56.486 20:23:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3331549 00:05:56.486 20:23:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:56.486 20:23:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:56.486 20:23:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3331549' 00:05:56.486 killing process with pid 3331549 00:05:56.486 20:23:14 -- common/autotest_common.sh@945 -- # kill 3331549 00:05:56.486 20:23:14 -- common/autotest_common.sh@950 -- # wait 3331549 00:05:57.428 00:05:57.428 real 0m2.407s 00:05:57.428 user 0m6.165s 00:05:57.428 sys 0m0.616s 00:05:57.428 20:23:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.428 20:23:15 -- common/autotest_common.sh@10 -- # set +x 00:05:57.428 ************************************ 00:05:57.428 END TEST locking_overlapped_coremask 00:05:57.428 ************************************ 00:05:57.428 20:23:15 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:57.428 20:23:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:57.429 20:23:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.429 20:23:15 -- common/autotest_common.sh@10 -- # set +x 00:05:57.429 ************************************ 00:05:57.429 START 
TEST locking_overlapped_coremask_via_rpc 00:05:57.429 ************************************ 00:05:57.429 20:23:15 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:05:57.429 20:23:15 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3332164 00:05:57.429 20:23:15 -- event/cpu_locks.sh@149 -- # waitforlisten 3332164 /var/tmp/spdk.sock 00:05:57.429 20:23:15 -- common/autotest_common.sh@819 -- # '[' -z 3332164 ']' 00:05:57.429 20:23:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.429 20:23:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:57.429 20:23:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.429 20:23:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:57.429 20:23:15 -- common/autotest_common.sh@10 -- # set +x 00:05:57.429 20:23:15 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:57.429 [2024-04-26 20:23:15.701727] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:57.429 [2024-04-26 20:23:15.701866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3332164 ] 00:05:57.689 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.689 [2024-04-26 20:23:15.832191] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:57.689 [2024-04-26 20:23:15.832247] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:57.689 [2024-04-26 20:23:15.928095] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:57.689 [2024-04-26 20:23:15.928400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.689 [2024-04-26 20:23:15.928488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.689 [2024-04-26 20:23:15.928493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.260 20:23:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:58.260 20:23:16 -- common/autotest_common.sh@852 -- # return 0 00:05:58.260 20:23:16 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3332208 00:05:58.260 20:23:16 -- event/cpu_locks.sh@153 -- # waitforlisten 3332208 /var/tmp/spdk2.sock 00:05:58.260 20:23:16 -- common/autotest_common.sh@819 -- # '[' -z 3332208 ']' 00:05:58.260 20:23:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.260 20:23:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:58.260 20:23:16 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:58.260 20:23:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.260 20:23:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:58.260 20:23:16 -- common/autotest_common.sh@10 -- # set +x 00:05:58.260 [2024-04-26 20:23:16.502950] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:58.260 [2024-04-26 20:23:16.503090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3332208 ] 00:05:58.260 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.520 [2024-04-26 20:23:16.672920] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:58.520 [2024-04-26 20:23:16.672972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:58.781 [2024-04-26 20:23:16.873399] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:58.781 [2024-04-26 20:23:16.873671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:58.781 [2024-04-26 20:23:16.877447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.781 [2024-04-26 20:23:16.877478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:59.725 20:23:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:59.725 20:23:17 -- common/autotest_common.sh@852 -- # return 0 00:05:59.725 20:23:17 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:59.725 20:23:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.725 20:23:17 -- common/autotest_common.sh@10 -- # set +x 00:05:59.725 20:23:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.725 20:23:17 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:59.725 20:23:17 -- common/autotest_common.sh@640 -- # local es=0 00:05:59.725 20:23:17 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:59.725 20:23:17 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:05:59.725 20:23:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:59.725 20:23:17 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:05:59.725 20:23:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:59.725 20:23:17 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:59.725 20:23:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.725 20:23:17 -- common/autotest_common.sh@10 -- # set +x 00:05:59.725 [2024-04-26 20:23:17.886500] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3332164 has claimed it. 
00:05:59.725 request: 00:05:59.725 { 00:05:59.725 "method": "framework_enable_cpumask_locks", 00:05:59.725 "req_id": 1 00:05:59.725 } 00:05:59.725 Got JSON-RPC error response 00:05:59.725 response: 00:05:59.725 { 00:05:59.725 "code": -32603, 00:05:59.725 "message": "Failed to claim CPU core: 2" 00:05:59.725 } 00:05:59.725 20:23:17 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:05:59.725 20:23:17 -- common/autotest_common.sh@643 -- # es=1 00:05:59.725 20:23:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:59.725 20:23:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:59.725 20:23:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:59.725 20:23:17 -- event/cpu_locks.sh@158 -- # waitforlisten 3332164 /var/tmp/spdk.sock 00:05:59.725 20:23:17 -- common/autotest_common.sh@819 -- # '[' -z 3332164 ']' 00:05:59.725 20:23:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.725 20:23:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:59.725 20:23:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.725 20:23:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:59.725 20:23:17 -- common/autotest_common.sh@10 -- # set +x 00:05:59.725 20:23:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:59.725 20:23:18 -- common/autotest_common.sh@852 -- # return 0 00:05:59.725 20:23:18 -- event/cpu_locks.sh@159 -- # waitforlisten 3332208 /var/tmp/spdk2.sock 00:05:59.725 20:23:18 -- common/autotest_common.sh@819 -- # '[' -z 3332208 ']' 00:05:59.725 20:23:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.725 20:23:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:59.725 20:23:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
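[Annotation: rpc_cmd is the harness wrapper around SPDK's JSON-RPC client; outside the harness the same exchange could plausibly be reproduced with scripts/rpc.py from the SPDK tree against the two sockets named above. A sketch under that assumption:

  ./spdk/scripts/rpc.py framework_enable_cpumask_locks
  # first target claims cores 0-2, writing /var/tmp/spdk_cpu_lock_000..002
  ./spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # -> {"code": -32603, "message": "Failed to claim CPU core: 2"}

The request/response JSON printed above is this second call's error.]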
00:05:59.725 20:23:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:59.725 20:23:18 -- common/autotest_common.sh@10 -- # set +x 00:05:59.985 20:23:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:59.985 20:23:18 -- common/autotest_common.sh@852 -- # return 0 00:05:59.985 20:23:18 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:59.985 20:23:18 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:59.985 20:23:18 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:59.985 20:23:18 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:59.985 00:05:59.985 real 0m2.606s 00:05:59.985 user 0m0.837s 00:05:59.985 sys 0m0.185s 00:05:59.985 20:23:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.985 20:23:18 -- common/autotest_common.sh@10 -- # set +x 00:05:59.985 ************************************ 00:05:59.985 END TEST locking_overlapped_coremask_via_rpc 00:05:59.985 ************************************ 00:05:59.985 20:23:18 -- event/cpu_locks.sh@174 -- # cleanup 00:05:59.985 20:23:18 -- event/cpu_locks.sh@15 -- # [[ -z 3332164 ]] 00:05:59.985 20:23:18 -- event/cpu_locks.sh@15 -- # killprocess 3332164 00:05:59.985 20:23:18 -- common/autotest_common.sh@926 -- # '[' -z 3332164 ']' 00:05:59.985 20:23:18 -- common/autotest_common.sh@930 -- # kill -0 3332164 00:05:59.985 20:23:18 -- common/autotest_common.sh@931 -- # uname 00:05:59.985 20:23:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:59.985 20:23:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3332164 00:05:59.985 20:23:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:59.985 20:23:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:59.985 20:23:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3332164' 00:05:59.985 killing process with pid 3332164 00:05:59.985 20:23:18 -- common/autotest_common.sh@945 -- # kill 3332164 00:05:59.985 20:23:18 -- common/autotest_common.sh@950 -- # wait 3332164 00:06:00.927 20:23:19 -- event/cpu_locks.sh@16 -- # [[ -z 3332208 ]] 00:06:00.927 20:23:19 -- event/cpu_locks.sh@16 -- # killprocess 3332208 00:06:00.927 20:23:19 -- common/autotest_common.sh@926 -- # '[' -z 3332208 ']' 00:06:00.927 20:23:19 -- common/autotest_common.sh@930 -- # kill -0 3332208 00:06:00.927 20:23:19 -- common/autotest_common.sh@931 -- # uname 00:06:00.927 20:23:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:00.927 20:23:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3332208 00:06:00.927 20:23:19 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:00.927 20:23:19 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:00.927 20:23:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3332208' 00:06:00.927 killing process with pid 3332208 00:06:00.927 20:23:19 -- common/autotest_common.sh@945 -- # kill 3332208 00:06:00.927 20:23:19 -- common/autotest_common.sh@950 -- # wait 3332208 00:06:01.973 20:23:20 -- event/cpu_locks.sh@18 -- # rm -f 00:06:01.973 20:23:20 -- event/cpu_locks.sh@1 -- # cleanup 00:06:01.973 20:23:20 -- event/cpu_locks.sh@15 -- # [[ -z 3332164 ]] 00:06:01.973 20:23:20 -- event/cpu_locks.sh@15 -- # killprocess 3332164 
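[Annotation: check_remaining_locks, traced at cpu_locks.sh@36-38 above, is a plain glob-and-compare. Pulled out of the harness it amounts to:

  locks=(/var/tmp/spdk_cpu_lock_*)                    # whatever lock files exist
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2, i.e. mask 0x7
  [[ "${locks[*]}" == "${locks_expected[*]}" ]]       # any extra or missing file fails the test

so the test passes only if the first target holds exactly the three expected core locks.]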
00:06:01.973 20:23:20 -- common/autotest_common.sh@926 -- # '[' -z 3332164 ']' 00:06:01.973 20:23:20 -- common/autotest_common.sh@930 -- # kill -0 3332164 00:06:01.973 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3332164) - No such process 00:06:01.973 20:23:20 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3332164 is not found' 00:06:01.973 Process with pid 3332164 is not found 00:06:01.973 20:23:20 -- event/cpu_locks.sh@16 -- # [[ -z 3332208 ]] 00:06:01.973 20:23:20 -- event/cpu_locks.sh@16 -- # killprocess 3332208 00:06:01.973 20:23:20 -- common/autotest_common.sh@926 -- # '[' -z 3332208 ']' 00:06:01.973 20:23:20 -- common/autotest_common.sh@930 -- # kill -0 3332208 00:06:01.973 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3332208) - No such process 00:06:01.973 20:23:20 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3332208 is not found' 00:06:01.973 Process with pid 3332208 is not found 00:06:01.973 20:23:20 -- event/cpu_locks.sh@18 -- # rm -f 00:06:01.973 00:06:01.973 real 0m23.417s 00:06:01.973 user 0m39.725s 00:06:01.973 sys 0m5.325s 00:06:01.973 20:23:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.973 20:23:20 -- common/autotest_common.sh@10 -- # set +x 00:06:01.973 ************************************ 00:06:01.973 END TEST cpu_locks 00:06:01.973 ************************************ 00:06:01.973 00:06:01.973 real 0m47.967s 00:06:01.973 user 1m26.841s 00:06:01.973 sys 0m8.574s 00:06:01.973 20:23:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.973 20:23:20 -- common/autotest_common.sh@10 -- # set +x 00:06:01.973 ************************************ 00:06:01.973 END TEST event 00:06:01.973 ************************************ 00:06:01.973 20:23:20 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/thread.sh 00:06:01.973 20:23:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:01.973 20:23:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:01.973 20:23:20 -- common/autotest_common.sh@10 -- # set +x 00:06:01.973 ************************************ 00:06:01.973 START TEST thread 00:06:01.973 ************************************ 00:06:01.973 20:23:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/thread.sh 00:06:01.973 * Looking for test storage... 00:06:01.973 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread 00:06:01.973 20:23:20 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:01.973 20:23:20 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:01.973 20:23:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:01.973 20:23:20 -- common/autotest_common.sh@10 -- # set +x 00:06:01.973 ************************************ 00:06:01.973 START TEST thread_poller_perf 00:06:01.973 ************************************ 00:06:01.973 20:23:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:01.974 [2024-04-26 20:23:20.246607] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
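[Annotation, on the cleanup and totals above: the "No such process" lines are expected, not failures. killprocess probes each pid before acting, roughly

  kill -0 "$pid" || echo "Process with pid $pid is not found"

so cleanup stays idempotent after the earlier explicit kills. The suite totals also look inverted at first glance -- user 0m39.725s against real 0m23.417s for cpu_locks -- but that is normal here: SPDK reactors busy-poll their cores, so several reactors accumulate CPU time concurrently within the same wall-clock window.]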
00:06:01.974 [2024-04-26 20:23:20.246749] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3333131 ] 00:06:02.239 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.239 [2024-04-26 20:23:20.378250] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.239 [2024-04-26 20:23:20.476889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.239 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:03.624 ====================================== 00:06:03.624 busy:1908682162 (cyc) 00:06:03.624 total_run_count: 381000 00:06:03.624 tsc_hz: 1900000000 (cyc) 00:06:03.624 ====================================== 00:06:03.624 poller_cost: 5009 (cyc), 2636 (nsec) 00:06:03.624 00:06:03.624 real 0m1.449s 00:06:03.624 user 0m1.287s 00:06:03.624 sys 0m0.155s 00:06:03.624 20:23:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.624 20:23:21 -- common/autotest_common.sh@10 -- # set +x 00:06:03.624 ************************************ 00:06:03.624 END TEST thread_poller_perf 00:06:03.624 ************************************ 00:06:03.624 20:23:21 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:03.624 20:23:21 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:03.624 20:23:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:03.624 20:23:21 -- common/autotest_common.sh@10 -- # set +x 00:06:03.624 ************************************ 00:06:03.624 START TEST thread_poller_perf 00:06:03.624 ************************************ 00:06:03.624 20:23:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:03.624 [2024-04-26 20:23:21.743139] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:03.624 [2024-04-26 20:23:21.743283] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3333446 ] 00:06:03.624 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.624 [2024-04-26 20:23:21.875217] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.886 [2024-04-26 20:23:21.972697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.886 Running 1000 pollers for 1 seconds with 0 microseconds period. 
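[Annotation: a worked check of the first poller_perf report above -- poller_cost is just busy cycles divided by iterations:

  1908682162 cyc / 381000 runs ≈ 5009 cyc per poll
  5009 cyc / 1.9 cyc per nsec (tsc_hz = 1900000000) ≈ 2636 nsec

The second run, announced just above with a 0-microsecond period (-l 0), fires the same 1000 pollers on every reactor iteration instead of arming them on a timer; the far lower per-poll cost it reports below (359 cyc) suggests most of the 5009 cyc here is timer bookkeeping rather than the poll itself.]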
00:06:04.826 ====================================== 00:06:04.826 busy:1902654974 (cyc) 00:06:04.826 total_run_count: 5291000 00:06:04.827 tsc_hz: 1900000000 (cyc) 00:06:04.827 ====================================== 00:06:04.827 poller_cost: 359 (cyc), 188 (nsec) 00:06:04.827 00:06:04.827 real 0m1.444s 00:06:04.827 user 0m1.287s 00:06:04.827 sys 0m0.150s 00:06:04.827 20:23:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.827 20:23:23 -- common/autotest_common.sh@10 -- # set +x 00:06:04.827 ************************************ 00:06:04.827 END TEST thread_poller_perf 00:06:04.827 ************************************ 00:06:05.087 20:23:23 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:05.087 00:06:05.087 real 0m3.042s 00:06:05.087 user 0m2.628s 00:06:05.087 sys 0m0.419s 00:06:05.087 20:23:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.087 20:23:23 -- common/autotest_common.sh@10 -- # set +x 00:06:05.087 ************************************ 00:06:05.087 END TEST thread 00:06:05.087 ************************************ 00:06:05.087 20:23:23 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel.sh 00:06:05.087 20:23:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:05.087 20:23:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:05.087 20:23:23 -- common/autotest_common.sh@10 -- # set +x 00:06:05.087 ************************************ 00:06:05.087 START TEST accel 00:06:05.087 ************************************ 00:06:05.087 20:23:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel.sh 00:06:05.087 * Looking for test storage... 00:06:05.087 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel 00:06:05.087 20:23:23 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:05.087 20:23:23 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:05.087 20:23:23 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:05.087 20:23:23 -- accel/accel.sh@59 -- # spdk_tgt_pid=3333867 00:06:05.087 20:23:23 -- accel/accel.sh@60 -- # waitforlisten 3333867 00:06:05.087 20:23:23 -- common/autotest_common.sh@819 -- # '[' -z 3333867 ']' 00:06:05.087 20:23:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.087 20:23:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:05.087 20:23:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:05.087 20:23:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:05.087 20:23:23 -- common/autotest_common.sh@10 -- # set +x 00:06:05.087 20:23:23 -- accel/accel.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:05.087 20:23:23 -- accel/accel.sh@58 -- # build_accel_config 00:06:05.087 20:23:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:05.087 20:23:23 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:05.087 20:23:23 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:05.087 20:23:23 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:06:05.087 20:23:23 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:05.087 20:23:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:05.087 20:23:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:05.087 20:23:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:05.087 20:23:23 -- accel/accel.sh@42 -- # jq -r . 00:06:05.087 [2024-04-26 20:23:23.398977] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:05.087 [2024-04-26 20:23:23.399119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3333867 ] 00:06:05.348 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.348 [2024-04-26 20:23:23.529414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.348 [2024-04-26 20:23:23.626205] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:05.348 [2024-04-26 20:23:23.626445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.348 [2024-04-26 20:23:23.631033] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:05.348 [2024-04-26 20:23:23.638990] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:15.355 20:23:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:15.355 20:23:31 -- common/autotest_common.sh@852 -- # return 0 00:06:15.355 20:23:31 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:15.355 20:23:31 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:15.355 20:23:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.355 20:23:31 -- common/autotest_common.sh@10 -- # set +x 00:06:15.355 20:23:31 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:15.355 20:23:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.355 20:23:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:15.355 20:23:31 -- accel/accel.sh@64 -- # IFS== 00:06:15.355 20:23:31 -- accel/accel.sh@64 -- # read -r opc module 00:06:15.355 20:23:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:06:15.355 20:23:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:15.355 20:23:31 -- accel/accel.sh@64 -- # IFS== 00:06:15.355 20:23:31 -- accel/accel.sh@64 -- # read -r opc module 00:06:15.355 20:23:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:06:15.355 20:23:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:15.355 20:23:31 -- accel/accel.sh@64 -- # IFS== 00:06:15.355 20:23:31 -- accel/accel.sh@64 -- # read -r opc module 00:06:15.355 20:23:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:06:15.355 20:23:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:15.355 20:23:31 -- accel/accel.sh@64 -- # IFS== 00:06:15.355 20:23:31 -- accel/accel.sh@64 -- # read -r opc module 00:06:15.355 20:23:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:06:15.355 20:23:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:15.355 20:23:31 -- accel/accel.sh@64 -- # IFS== 00:06:15.355 20:23:31 -- accel/accel.sh@64 -- # read -r opc module 00:06:15.355 20:23:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:06:15.355 20:23:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:15.355 20:23:31 -- accel/accel.sh@64 -- # IFS== 00:06:15.355 20:23:31 -- accel/accel.sh@64 -- # read -r opc module 00:06:15.355 20:23:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:06:15.355 20:23:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:15.355 20:23:31 -- accel/accel.sh@64 -- # IFS== 00:06:15.355 20:23:31 -- accel/accel.sh@64 -- # read -r opc module 00:06:15.355 20:23:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=iaa 00:06:15.355 20:23:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:15.355 20:23:31 -- accel/accel.sh@64 -- # IFS== 00:06:15.355 20:23:31 -- accel/accel.sh@64 -- # read -r opc module 00:06:15.355 20:23:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=iaa 00:06:15.355 20:23:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:15.355 20:23:31 -- accel/accel.sh@64 -- # IFS== 00:06:15.355 20:23:31 -- accel/accel.sh@64 -- # read -r opc module 00:06:15.355 20:23:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:15.355 20:23:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:15.355 20:23:31 -- accel/accel.sh@64 -- # IFS== 00:06:15.355 20:23:31 -- accel/accel.sh@64 -- # read -r opc module 00:06:15.355 20:23:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:15.355 20:23:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:15.355 20:23:31 -- accel/accel.sh@64 -- # IFS== 00:06:15.355 20:23:31 -- accel/accel.sh@64 -- # read -r opc module 00:06:15.355 20:23:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:15.355 20:23:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:15.355 20:23:31 -- accel/accel.sh@64 -- # IFS== 00:06:15.355 20:23:31 -- accel/accel.sh@64 -- # read -r opc module 00:06:15.355 20:23:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:06:15.355 20:23:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:15.355 20:23:31 -- 
accel/accel.sh@64 -- # IFS== 00:06:15.355 20:23:31 -- accel/accel.sh@64 -- # read -r opc module 00:06:15.355 20:23:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:15.355 20:23:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:15.355 20:23:31 -- accel/accel.sh@64 -- # IFS== 00:06:15.355 20:23:31 -- accel/accel.sh@64 -- # read -r opc module 00:06:15.355 20:23:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:06:15.355 20:23:31 -- accel/accel.sh@67 -- # killprocess 3333867 00:06:15.355 20:23:31 -- common/autotest_common.sh@926 -- # '[' -z 3333867 ']' 00:06:15.355 20:23:31 -- common/autotest_common.sh@930 -- # kill -0 3333867 00:06:15.355 20:23:31 -- common/autotest_common.sh@931 -- # uname 00:06:15.355 20:23:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:15.355 20:23:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3333867 00:06:15.355 20:23:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:15.355 20:23:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:15.355 20:23:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3333867' 00:06:15.355 killing process with pid 3333867 00:06:15.355 20:23:31 -- common/autotest_common.sh@945 -- # kill 3333867 00:06:15.355 20:23:31 -- common/autotest_common.sh@950 -- # wait 3333867 00:06:17.266 20:23:35 -- accel/accel.sh@68 -- # trap - ERR 00:06:17.266 20:23:35 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:17.266 20:23:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:17.266 20:23:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.266 20:23:35 -- common/autotest_common.sh@10 -- # set +x 00:06:17.266 20:23:35 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:17.266 20:23:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:17.266 20:23:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.266 20:23:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.266 20:23:35 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:17.266 20:23:35 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:17.266 20:23:35 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:06:17.266 20:23:35 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:17.266 20:23:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.266 20:23:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.266 20:23:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.266 20:23:35 -- accel/accel.sh@42 -- # jq -r . 
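[Annotation: build_accel_config, traced repeatedly in this section, only collects the two scan-module stanzas and pipes them through jq -r . onto /dev/fd/63 for spdk_tgt -c. Assuming the standard subsystem wrapper that spdk_tgt expects (the wrapper itself is not shown in the trace), the generated config is equivalent to:

  jq -r . <<'JSON'
  {
    "subsystems": [
      {
        "subsystem": "accel",
        "config": [
          { "method": "dsa_scan_accel_module" },
          { "method": "iaa_scan_accel_module" }
        ]
      }
    ]
  }
  JSON

which is what produced the "Enabled DSA user-mode" / "Enabled IAA user-mode" notices and the dsa/iaa rows in the opc-assignment dump above.]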
00:06:17.266 20:23:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.266 20:23:35 -- common/autotest_common.sh@10 -- # set +x 00:06:17.266 20:23:35 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:17.266 20:23:35 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:17.266 20:23:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.266 20:23:35 -- common/autotest_common.sh@10 -- # set +x 00:06:17.266 ************************************ 00:06:17.266 START TEST accel_missing_filename 00:06:17.266 ************************************ 00:06:17.266 20:23:35 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:17.266 20:23:35 -- common/autotest_common.sh@640 -- # local es=0 00:06:17.266 20:23:35 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:17.266 20:23:35 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:17.266 20:23:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:17.266 20:23:35 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:17.266 20:23:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:17.266 20:23:35 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:17.266 20:23:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:17.266 20:23:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.266 20:23:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.266 20:23:35 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:17.266 20:23:35 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:17.266 20:23:35 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:06:17.266 20:23:35 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:17.266 20:23:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.266 20:23:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.266 20:23:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.266 20:23:35 -- accel/accel.sh@42 -- # jq -r . 00:06:17.266 [2024-04-26 20:23:35.396690] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:17.266 [2024-04-26 20:23:35.396806] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3336254 ] 00:06:17.266 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.266 [2024-04-26 20:23:35.525056] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.527 [2024-04-26 20:23:35.624222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.527 [2024-04-26 20:23:35.628863] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:17.527 [2024-04-26 20:23:35.636834] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:24.103 [2024-04-26 20:23:42.026572] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:26.009 [2024-04-26 20:23:43.883630] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:26.009 A filename is required. 
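[Annotation: the "A filename is required." abort above is the test's expected outcome -- for -w compress accel_perf has no default input, so -l <file> is mandatory. As a sketch, from the spdk checkout:

  ./build/examples/accel_perf -t 1 -w compress                      # fails: "A filename is required."
  ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib  # -l satisfies the check; the next
                                                                    # test adds -y to hit a different guard

In the status handling that follows, note how the NOT wrapper normalizes the failure: the raw exit code 234 is above 128, so it is reduced to 234 - 128 = 106, the case statement collapses that to es=1, and NOT counts any nonzero es as the expected result.]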
00:06:26.009 20:23:44 -- common/autotest_common.sh@643 -- # es=234 00:06:26.009 20:23:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:26.009 20:23:44 -- common/autotest_common.sh@652 -- # es=106 00:06:26.009 20:23:44 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:26.009 20:23:44 -- common/autotest_common.sh@660 -- # es=1 00:06:26.009 20:23:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:26.009 00:06:26.009 real 0m8.685s 00:06:26.009 user 0m2.296s 00:06:26.009 sys 0m0.246s 00:06:26.009 20:23:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.009 20:23:44 -- common/autotest_common.sh@10 -- # set +x 00:06:26.009 ************************************ 00:06:26.009 END TEST accel_missing_filename 00:06:26.009 ************************************ 00:06:26.009 20:23:44 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:06:26.009 20:23:44 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:26.009 20:23:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:26.009 20:23:44 -- common/autotest_common.sh@10 -- # set +x 00:06:26.009 ************************************ 00:06:26.009 START TEST accel_compress_verify 00:06:26.009 ************************************ 00:06:26.009 20:23:44 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:06:26.009 20:23:44 -- common/autotest_common.sh@640 -- # local es=0 00:06:26.009 20:23:44 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:06:26.009 20:23:44 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:26.009 20:23:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:26.009 20:23:44 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:26.009 20:23:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:26.009 20:23:44 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:06:26.009 20:23:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:06:26.009 20:23:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.009 20:23:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.009 20:23:44 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:26.009 20:23:44 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:26.009 20:23:44 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:06:26.009 20:23:44 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:26.009 20:23:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.009 20:23:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.009 20:23:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.009 20:23:44 -- accel/accel.sh@42 -- # jq -r . 00:06:26.009 [2024-04-26 20:23:44.119318] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:06:26.010 [2024-04-26 20:23:44.119449] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3337921 ] 00:06:26.010 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.010 [2024-04-26 20:23:44.251495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.010 [2024-04-26 20:23:44.342255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.010 [2024-04-26 20:23:44.346870] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:26.270 [2024-04-26 20:23:44.354840] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:32.848 [2024-04-26 20:23:50.766242] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.763 [2024-04-26 20:23:52.615606] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:34.763 00:06:34.763 Compression does not support the verify option, aborting. 00:06:34.763 20:23:52 -- common/autotest_common.sh@643 -- # es=161 00:06:34.763 20:23:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:34.763 20:23:52 -- common/autotest_common.sh@652 -- # es=33 00:06:34.763 20:23:52 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:34.763 20:23:52 -- common/autotest_common.sh@660 -- # es=1 00:06:34.763 20:23:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:34.763 00:06:34.763 real 0m8.702s 00:06:34.763 user 0m2.308s 00:06:34.763 sys 0m0.254s 00:06:34.763 20:23:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.763 20:23:52 -- common/autotest_common.sh@10 -- # set +x 00:06:34.763 ************************************ 00:06:34.763 END TEST accel_compress_verify 00:06:34.763 ************************************ 00:06:34.763 20:23:52 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:34.763 20:23:52 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:34.763 20:23:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:34.763 20:23:52 -- common/autotest_common.sh@10 -- # set +x 00:06:34.763 ************************************ 00:06:34.763 START TEST accel_wrong_workload 00:06:34.763 ************************************ 00:06:34.763 20:23:52 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:34.763 20:23:52 -- common/autotest_common.sh@640 -- # local es=0 00:06:34.763 20:23:52 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:34.763 20:23:52 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:34.763 20:23:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:34.763 20:23:52 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:34.763 20:23:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:34.763 20:23:52 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:34.763 20:23:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:34.763 20:23:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.763 20:23:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.763 20:23:52 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:34.763 20:23:52 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:34.763 20:23:52 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 
00:06:34.763 20:23:52 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:34.763 20:23:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.763 20:23:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.763 20:23:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.763 20:23:52 -- accel/accel.sh@42 -- # jq -r . 00:06:34.763 Unsupported workload type: foobar 00:06:34.763 [2024-04-26 20:23:52.860923] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:34.763 accel_perf options: 00:06:34.763 [-h help message] 00:06:34.763 [-q queue depth per core] 00:06:34.763 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:34.763 [-T number of threads per core 00:06:34.763 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:34.763 [-t time in seconds] 00:06:34.763 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:34.763 [ dif_verify, , dif_generate, dif_generate_copy 00:06:34.763 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:34.763 [-l for compress/decompress workloads, name of uncompressed input file 00:06:34.763 [-S for crc32c workload, use this seed value (default 0) 00:06:34.763 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:34.763 [-f for fill workload, use this BYTE value (default 255) 00:06:34.763 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:34.763 [-y verify result if this switch is on] 00:06:34.763 [-a tasks to allocate per core (default: same value as -q)] 00:06:34.763 Can be used to spread operations across a wider range of memory. 
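[Annotation: the usage dump above is printed because -w foobar is rejected during option parsing, before the app framework even starts -- hence the parse error from app.c rather than a runtime failure. Any workload from the printed list parses, for instance:

  ./build/examples/accel_perf -t 1 -w foobar     # rejected: Unsupported workload type
  ./build/examples/accel_perf -t 1 -w copy -y    # valid; copy is exercised at the end of this section]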
00:06:34.763 20:23:52 -- common/autotest_common.sh@643 -- # es=1 00:06:34.763 20:23:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:34.763 20:23:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:34.763 20:23:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:34.763 00:06:34.763 real 0m0.064s 00:06:34.763 user 0m0.056s 00:06:34.763 sys 0m0.040s 00:06:34.763 20:23:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.763 20:23:52 -- common/autotest_common.sh@10 -- # set +x 00:06:34.763 ************************************ 00:06:34.763 END TEST accel_wrong_workload 00:06:34.763 ************************************ 00:06:34.763 20:23:52 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:34.763 20:23:52 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:34.763 20:23:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:34.763 20:23:52 -- common/autotest_common.sh@10 -- # set +x 00:06:34.763 ************************************ 00:06:34.763 START TEST accel_negative_buffers 00:06:34.763 ************************************ 00:06:34.763 20:23:52 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:34.763 20:23:52 -- common/autotest_common.sh@640 -- # local es=0 00:06:34.763 20:23:52 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:34.763 20:23:52 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:34.763 20:23:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:34.763 20:23:52 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:34.763 20:23:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:34.763 20:23:52 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:34.763 20:23:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:34.763 20:23:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.763 20:23:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.763 20:23:52 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:34.763 20:23:52 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:34.763 20:23:52 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:06:34.763 20:23:52 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:34.763 20:23:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.763 20:23:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.763 20:23:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.763 20:23:52 -- accel/accel.sh@42 -- # jq -r . 00:06:34.763 -x option must be non-negative. 00:06:34.763 [2024-04-26 20:23:52.955376] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:34.763 accel_perf options: 00:06:34.763 [-h help message] 00:06:34.763 [-q queue depth per core] 00:06:34.763 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:34.763 [-T number of threads per core 00:06:34.763 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:06:34.763 [-t time in seconds] 00:06:34.763 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:34.763 [ dif_verify, , dif_generate, dif_generate_copy 00:06:34.763 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:34.763 [-l for compress/decompress workloads, name of uncompressed input file 00:06:34.763 [-S for crc32c workload, use this seed value (default 0) 00:06:34.763 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:34.763 [-f for fill workload, use this BYTE value (default 255) 00:06:34.763 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:34.763 [-y verify result if this switch is on] 00:06:34.763 [-a tasks to allocate per core (default: same value as -q)] 00:06:34.763 Can be used to spread operations across a wider range of memory. 00:06:34.763 20:23:52 -- common/autotest_common.sh@643 -- # es=1 00:06:34.763 20:23:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:34.763 20:23:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:34.763 20:23:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:34.763 00:06:34.763 real 0m0.056s 00:06:34.763 user 0m0.056s 00:06:34.763 sys 0m0.033s 00:06:34.763 20:23:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.763 20:23:52 -- common/autotest_common.sh@10 -- # set +x 00:06:34.763 ************************************ 00:06:34.763 END TEST accel_negative_buffers 00:06:34.763 ************************************ 00:06:34.763 20:23:53 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:34.763 20:23:53 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:34.763 20:23:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:34.763 20:23:53 -- common/autotest_common.sh@10 -- # set +x 00:06:34.763 ************************************ 00:06:34.763 START TEST accel_crc32c 00:06:34.763 ************************************ 00:06:34.763 20:23:53 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:34.763 20:23:53 -- accel/accel.sh@16 -- # local accel_opc 00:06:34.763 20:23:53 -- accel/accel.sh@17 -- # local accel_module 00:06:34.763 20:23:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:34.763 20:23:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:34.763 20:23:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.763 20:23:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.763 20:23:53 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:34.764 20:23:53 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:34.764 20:23:53 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:06:34.764 20:23:53 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:34.764 20:23:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.764 20:23:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.764 20:23:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.764 20:23:53 -- accel/accel.sh@42 -- # jq -r . 00:06:34.764 [2024-04-26 20:23:53.042841] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
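[Annotation, stepping back to accel_negative_buffers for a moment before the crc32c run starting above gets going: the -x guard behaves like the -w guard and rejects the value at parse time, and per the usage text xor is the workload that consumes source buffers, with a documented minimum of two:

  ./build/examples/accel_perf -t 1 -w xor -y -x -1   # rejected: "-x option must be non-negative."
  ./build/examples/accel_perf -t 1 -w xor -y -x 2    # the documented minimum for xor sources]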
00:06:34.764 [2024-04-26 20:23:53.042947] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3339768 ] 00:06:35.023 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.023 [2024-04-26 20:23:53.159715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.023 [2024-04-26 20:23:53.261672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.023 [2024-04-26 20:23:53.266214] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:35.023 [2024-04-26 20:23:53.274198] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:45.026 20:24:02 -- accel/accel.sh@18 -- # out=' 00:06:45.026 SPDK Configuration: 00:06:45.026 Core mask: 0x1 00:06:45.026 00:06:45.026 Accel Perf Configuration: 00:06:45.026 Workload Type: crc32c 00:06:45.026 CRC-32C seed: 32 00:06:45.026 Transfer size: 4096 bytes 00:06:45.026 Vector count 1 00:06:45.026 Module: dsa 00:06:45.026 Queue depth: 32 00:06:45.026 Allocate depth: 32 00:06:45.026 # threads/core: 1 00:06:45.026 Run time: 1 seconds 00:06:45.026 Verify: Yes 00:06:45.026 00:06:45.026 Running for 1 seconds... 00:06:45.026 00:06:45.026 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:45.026 ------------------------------------------------------------------------------------ 00:06:45.026 0,0 353824/s 1382 MiB/s 0 0 00:06:45.026 ==================================================================================== 00:06:45.026 Total 353824/s 1382 MiB/s 0 0' 00:06:45.026 20:24:02 -- accel/accel.sh@20 -- # IFS=: 00:06:45.026 20:24:02 -- accel/accel.sh@20 -- # read -r var val 00:06:45.026 20:24:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:45.026 20:24:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:45.026 20:24:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.026 20:24:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.026 20:24:02 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:45.026 20:24:02 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:45.026 20:24:02 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:06:45.026 20:24:02 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:45.026 20:24:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.026 20:24:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.026 20:24:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.026 20:24:02 -- accel/accel.sh@42 -- # jq -r . 00:06:45.026 [2024-04-26 20:24:02.741766] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
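[Annotation: a quick consistency check of the crc32c report above: 353824 transfers/s x 4096 bytes ≈ 1382 MiB/s, matching the printed bandwidth, and "Module: dsa" confirms crc32c ran on the DSA engine it was assigned to in the opc dump earlier. The long val= stretch that follows is not noise: accel.sh re-reads that report line by line with IFS=: and read -r var val, matching keys in a case statement to capture accel_opc and accel_module for the final [[ dsa == \d\s\a ]] assertion that closes the test.]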
00:06:45.026 [2024-04-26 20:24:02.741898] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3341956 ] 00:06:45.026 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.026 [2024-04-26 20:24:02.858001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.026 [2024-04-26 20:24:02.953326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.026 [2024-04-26 20:24:02.957889] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:45.026 [2024-04-26 20:24:02.965867] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:51.617 20:24:09 -- accel/accel.sh@21 -- # val= 00:06:51.617 20:24:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # IFS=: 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # read -r var val 00:06:51.617 20:24:09 -- accel/accel.sh@21 -- # val= 00:06:51.617 20:24:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # IFS=: 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # read -r var val 00:06:51.617 20:24:09 -- accel/accel.sh@21 -- # val=0x1 00:06:51.617 20:24:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # IFS=: 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # read -r var val 00:06:51.617 20:24:09 -- accel/accel.sh@21 -- # val= 00:06:51.617 20:24:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # IFS=: 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # read -r var val 00:06:51.617 20:24:09 -- accel/accel.sh@21 -- # val= 00:06:51.617 20:24:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # IFS=: 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # read -r var val 00:06:51.617 20:24:09 -- accel/accel.sh@21 -- # val=crc32c 00:06:51.617 20:24:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.617 20:24:09 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # IFS=: 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # read -r var val 00:06:51.617 20:24:09 -- accel/accel.sh@21 -- # val=32 00:06:51.617 20:24:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # IFS=: 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # read -r var val 00:06:51.617 20:24:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:51.617 20:24:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # IFS=: 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # read -r var val 00:06:51.617 20:24:09 -- accel/accel.sh@21 -- # val= 00:06:51.617 20:24:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # IFS=: 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # read -r var val 00:06:51.617 20:24:09 -- accel/accel.sh@21 -- # val=dsa 00:06:51.617 20:24:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.617 20:24:09 -- accel/accel.sh@23 -- # accel_module=dsa 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # IFS=: 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # read -r var val 00:06:51.617 20:24:09 -- accel/accel.sh@21 -- # val=32 00:06:51.617 20:24:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # IFS=: 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # read -r var val 00:06:51.617 20:24:09 -- 
accel/accel.sh@21 -- # val=32 00:06:51.617 20:24:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # IFS=: 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # read -r var val 00:06:51.617 20:24:09 -- accel/accel.sh@21 -- # val=1 00:06:51.617 20:24:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # IFS=: 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # read -r var val 00:06:51.617 20:24:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:51.617 20:24:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # IFS=: 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # read -r var val 00:06:51.617 20:24:09 -- accel/accel.sh@21 -- # val=Yes 00:06:51.617 20:24:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # IFS=: 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # read -r var val 00:06:51.617 20:24:09 -- accel/accel.sh@21 -- # val= 00:06:51.617 20:24:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # IFS=: 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # read -r var val 00:06:51.617 20:24:09 -- accel/accel.sh@21 -- # val= 00:06:51.617 20:24:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # IFS=: 00:06:51.617 20:24:09 -- accel/accel.sh@20 -- # read -r var val 00:06:54.163 20:24:12 -- accel/accel.sh@21 -- # val= 00:06:54.163 20:24:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.163 20:24:12 -- accel/accel.sh@20 -- # IFS=: 00:06:54.163 20:24:12 -- accel/accel.sh@20 -- # read -r var val 00:06:54.163 20:24:12 -- accel/accel.sh@21 -- # val= 00:06:54.163 20:24:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.163 20:24:12 -- accel/accel.sh@20 -- # IFS=: 00:06:54.163 20:24:12 -- accel/accel.sh@20 -- # read -r var val 00:06:54.163 20:24:12 -- accel/accel.sh@21 -- # val= 00:06:54.163 20:24:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.163 20:24:12 -- accel/accel.sh@20 -- # IFS=: 00:06:54.163 20:24:12 -- accel/accel.sh@20 -- # read -r var val 00:06:54.163 20:24:12 -- accel/accel.sh@21 -- # val= 00:06:54.163 20:24:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.163 20:24:12 -- accel/accel.sh@20 -- # IFS=: 00:06:54.163 20:24:12 -- accel/accel.sh@20 -- # read -r var val 00:06:54.163 20:24:12 -- accel/accel.sh@21 -- # val= 00:06:54.163 20:24:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.163 20:24:12 -- accel/accel.sh@20 -- # IFS=: 00:06:54.163 20:24:12 -- accel/accel.sh@20 -- # read -r var val 00:06:54.163 20:24:12 -- accel/accel.sh@21 -- # val= 00:06:54.163 20:24:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.163 20:24:12 -- accel/accel.sh@20 -- # IFS=: 00:06:54.163 20:24:12 -- accel/accel.sh@20 -- # read -r var val 00:06:54.163 20:24:12 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:06:54.163 20:24:12 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:54.163 20:24:12 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:06:54.163 00:06:54.163 real 0m19.381s 00:06:54.163 user 0m6.550s 00:06:54.163 sys 0m0.468s 00:06:54.164 20:24:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.164 20:24:12 -- common/autotest_common.sh@10 -- # set +x 00:06:54.164 ************************************ 00:06:54.164 END TEST accel_crc32c 00:06:54.164 ************************************ 00:06:54.164 20:24:12 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:54.164 20:24:12 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 
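[Annotation, before accel_crc32c_C2 gets under way -- a timing note on the test that just closed: real 0m19.381s for two 1-second measurements is dominated by app lifecycle, not hashing. Each accel_perf invocation boots a DPDK/SPDK application, probes the DSA and IAA devices, and tears down again; the jumps in the log clock above (00:06:35 -> 00:06:45, then 00:06:45 -> 00:06:51) show where the wall time goes, while user 0m6.550s is the CPU actually spent polling.]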
00:06:54.164 20:24:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.164 20:24:12 -- common/autotest_common.sh@10 -- # set +x 00:06:54.164 ************************************ 00:06:54.164 START TEST accel_crc32c_C2 00:06:54.164 ************************************ 00:06:54.164 20:24:12 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:54.164 20:24:12 -- accel/accel.sh@16 -- # local accel_opc 00:06:54.164 20:24:12 -- accel/accel.sh@17 -- # local accel_module 00:06:54.164 20:24:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:54.164 20:24:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:54.164 20:24:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.164 20:24:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.164 20:24:12 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:54.164 20:24:12 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:54.164 20:24:12 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:06:54.164 20:24:12 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:54.164 20:24:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.164 20:24:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.164 20:24:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.164 20:24:12 -- accel/accel.sh@42 -- # jq -r . 00:06:54.164 [2024-04-26 20:24:12.471172] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:54.164 [2024-04-26 20:24:12.471309] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3344243 ] 00:06:54.425 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.425 [2024-04-26 20:24:12.599536] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.425 [2024-04-26 20:24:12.693584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.425 [2024-04-26 20:24:12.698184] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:54.425 [2024-04-26 20:24:12.706157] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:04.427 20:24:22 -- accel/accel.sh@18 -- # out=' 00:07:04.427 SPDK Configuration: 00:07:04.427 Core mask: 0x1 00:07:04.427 00:07:04.427 Accel Perf Configuration: 00:07:04.427 Workload Type: crc32c 00:07:04.427 CRC-32C seed: 0 00:07:04.427 Transfer size: 4096 bytes 00:07:04.427 Vector count 2 00:07:04.427 Module: dsa 00:07:04.427 Queue depth: 32 00:07:04.427 Allocate depth: 32 00:07:04.427 # threads/core: 1 00:07:04.427 Run time: 1 seconds 00:07:04.427 Verify: Yes 00:07:04.427 00:07:04.427 Running for 1 seconds... 
00:07:04.427 00:07:04.427 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:04.427 ------------------------------------------------------------------------------------ 00:07:04.427 0,0 246782/s 1927 MiB/s 0 0 00:07:04.427 ==================================================================================== 00:07:04.427 Total 246782/s 963 MiB/s 0 0' 00:07:04.427 20:24:22 -- accel/accel.sh@20 -- # IFS=: 00:07:04.427 20:24:22 -- accel/accel.sh@20 -- # read -r var val 00:07:04.427 20:24:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:04.427 20:24:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:04.427 20:24:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.427 20:24:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.427 20:24:22 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:04.427 20:24:22 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:04.427 20:24:22 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:07:04.427 20:24:22 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:04.427 20:24:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.427 20:24:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.427 20:24:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.427 20:24:22 -- accel/accel.sh@42 -- # jq -r . 00:07:04.427 [2024-04-26 20:24:22.189184] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:04.427 [2024-04-26 20:24:22.189311] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3346245 ] 00:07:04.427 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.427 [2024-04-26 20:24:22.307176] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.427 [2024-04-26 20:24:22.401433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.427 [2024-04-26 20:24:22.405978] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:04.427 [2024-04-26 20:24:22.414015] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:11.016 20:24:28 -- accel/accel.sh@21 -- # val= 00:07:11.016 20:24:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # IFS=: 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # read -r var val 00:07:11.016 20:24:28 -- accel/accel.sh@21 -- # val= 00:07:11.016 20:24:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # IFS=: 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # read -r var val 00:07:11.016 20:24:28 -- accel/accel.sh@21 -- # val=0x1 00:07:11.016 20:24:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # IFS=: 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # read -r var val 00:07:11.016 20:24:28 -- accel/accel.sh@21 -- # val= 00:07:11.016 20:24:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # IFS=: 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # read -r var val 00:07:11.016 20:24:28 -- accel/accel.sh@21 -- # val= 00:07:11.016 20:24:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # IFS=: 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # read -r var val 00:07:11.016 20:24:28 -- accel/accel.sh@21 -- # 
val=crc32c 00:07:11.016 20:24:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.016 20:24:28 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # IFS=: 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # read -r var val 00:07:11.016 20:24:28 -- accel/accel.sh@21 -- # val=0 00:07:11.016 20:24:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # IFS=: 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # read -r var val 00:07:11.016 20:24:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:11.016 20:24:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # IFS=: 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # read -r var val 00:07:11.016 20:24:28 -- accel/accel.sh@21 -- # val= 00:07:11.016 20:24:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # IFS=: 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # read -r var val 00:07:11.016 20:24:28 -- accel/accel.sh@21 -- # val=dsa 00:07:11.016 20:24:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.016 20:24:28 -- accel/accel.sh@23 -- # accel_module=dsa 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # IFS=: 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # read -r var val 00:07:11.016 20:24:28 -- accel/accel.sh@21 -- # val=32 00:07:11.016 20:24:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # IFS=: 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # read -r var val 00:07:11.016 20:24:28 -- accel/accel.sh@21 -- # val=32 00:07:11.016 20:24:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # IFS=: 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # read -r var val 00:07:11.016 20:24:28 -- accel/accel.sh@21 -- # val=1 00:07:11.016 20:24:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # IFS=: 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # read -r var val 00:07:11.016 20:24:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:11.016 20:24:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # IFS=: 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # read -r var val 00:07:11.016 20:24:28 -- accel/accel.sh@21 -- # val=Yes 00:07:11.016 20:24:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # IFS=: 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # read -r var val 00:07:11.016 20:24:28 -- accel/accel.sh@21 -- # val= 00:07:11.016 20:24:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # IFS=: 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # read -r var val 00:07:11.016 20:24:28 -- accel/accel.sh@21 -- # val= 00:07:11.016 20:24:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # IFS=: 00:07:11.016 20:24:28 -- accel/accel.sh@20 -- # read -r var val 00:07:13.574 20:24:31 -- accel/accel.sh@21 -- # val= 00:07:13.574 20:24:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.574 20:24:31 -- accel/accel.sh@20 -- # IFS=: 00:07:13.574 20:24:31 -- accel/accel.sh@20 -- # read -r var val 00:07:13.574 20:24:31 -- accel/accel.sh@21 -- # val= 00:07:13.574 20:24:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.574 20:24:31 -- accel/accel.sh@20 -- # IFS=: 00:07:13.574 20:24:31 -- accel/accel.sh@20 -- # read -r var val 00:07:13.574 20:24:31 -- accel/accel.sh@21 -- # val= 00:07:13.574 20:24:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.574 20:24:31 -- 
accel/accel.sh@20 -- # IFS=: 00:07:13.574 20:24:31 -- accel/accel.sh@20 -- # read -r var val 00:07:13.574 20:24:31 -- accel/accel.sh@21 -- # val= 00:07:13.574 20:24:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.574 20:24:31 -- accel/accel.sh@20 -- # IFS=: 00:07:13.574 20:24:31 -- accel/accel.sh@20 -- # read -r var val 00:07:13.574 20:24:31 -- accel/accel.sh@21 -- # val= 00:07:13.574 20:24:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.574 20:24:31 -- accel/accel.sh@20 -- # IFS=: 00:07:13.574 20:24:31 -- accel/accel.sh@20 -- # read -r var val 00:07:13.574 20:24:31 -- accel/accel.sh@21 -- # val= 00:07:13.574 20:24:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.574 20:24:31 -- accel/accel.sh@20 -- # IFS=: 00:07:13.574 20:24:31 -- accel/accel.sh@20 -- # read -r var val 00:07:13.574 20:24:31 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:07:13.574 20:24:31 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:13.574 20:24:31 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:07:13.574 00:07:13.574 real 0m19.390s 00:07:13.574 user 0m6.546s 00:07:13.574 sys 0m0.491s 00:07:13.574 20:24:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.574 20:24:31 -- common/autotest_common.sh@10 -- # set +x 00:07:13.574 ************************************ 00:07:13.574 END TEST accel_crc32c_C2 00:07:13.574 ************************************ 00:07:13.574 20:24:31 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:13.574 20:24:31 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:13.574 20:24:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.574 20:24:31 -- common/autotest_common.sh@10 -- # set +x 00:07:13.574 ************************************ 00:07:13.574 START TEST accel_copy 00:07:13.574 ************************************ 00:07:13.574 20:24:31 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:07:13.574 20:24:31 -- accel/accel.sh@16 -- # local accel_opc 00:07:13.574 20:24:31 -- accel/accel.sh@17 -- # local accel_module 00:07:13.574 20:24:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:07:13.574 20:24:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:13.574 20:24:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.574 20:24:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.574 20:24:31 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:13.574 20:24:31 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:13.574 20:24:31 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:07:13.574 20:24:31 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:13.574 20:24:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.574 20:24:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.574 20:24:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.574 20:24:31 -- accel/accel.sh@42 -- # jq -r . 00:07:13.574 [2024-04-26 20:24:31.887937] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
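Annotation: the xtrace above shows the setup that recurs before every test: build_accel_config collects one JSON RPC fragment per enabled module (dsa_scan_accel_module, iaa_scan_accel_module), joins them with IFS=',' and runs the result through jq -r ., and accel_perf then reads that config via -c /dev/fd/62. A minimal standalone sketch of the same pattern follows; the enclosing JSON envelope is an assumption, since only the inner method fragments are visible in this trace:

  # fragments exactly as they appear in accel_json_cfg above
  accel_json_cfg=('{"method": "dsa_scan_accel_module"}' '{"method": "iaa_scan_accel_module"}')
  IFS=,   # join array elements with commas, as accel.sh@41 does
  # assumed envelope around the fragments; jq -r . validates/pretty-prints it
  conf=$(jq -r . <<< '{"subsystems": [{"subsystem": "accel", "config": ['"${accel_json_cfg[*]}"']}]}')
  # process substitution hands the config to accel_perf as /dev/fd/NN,
  # matching the "-c /dev/fd/62" seen in the invocations above
  ./spdk/build/examples/accel_perf -c <(echo "$conf") -t 1 -w copy -y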
00:07:13.574 [2024-04-26 20:24:31.888065] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3348181 ] 00:07:13.834 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.834 [2024-04-26 20:24:32.004110] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.834 [2024-04-26 20:24:32.098114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.834 [2024-04-26 20:24:32.102666] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:13.834 [2024-04-26 20:24:32.110641] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:23.839 20:24:41 -- accel/accel.sh@18 -- # out=' 00:07:23.839 SPDK Configuration: 00:07:23.840 Core mask: 0x1 00:07:23.840 00:07:23.840 Accel Perf Configuration: 00:07:23.840 Workload Type: copy 00:07:23.840 Transfer size: 4096 bytes 00:07:23.840 Vector count 1 00:07:23.840 Module: dsa 00:07:23.840 Queue depth: 32 00:07:23.840 Allocate depth: 32 00:07:23.840 # threads/core: 1 00:07:23.840 Run time: 1 seconds 00:07:23.840 Verify: Yes 00:07:23.840 00:07:23.840 Running for 1 seconds... 00:07:23.840 00:07:23.840 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:23.840 ------------------------------------------------------------------------------------ 00:07:23.840 0,0 229056/s 894 MiB/s 0 0 00:07:23.840 ==================================================================================== 00:07:23.840 Total 229056/s 894 MiB/s 0 0' 00:07:23.840 20:24:41 -- accel/accel.sh@20 -- # IFS=: 00:07:23.840 20:24:41 -- accel/accel.sh@20 -- # read -r var val 00:07:23.840 20:24:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:23.840 20:24:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:23.840 20:24:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.840 20:24:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.840 20:24:41 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:23.840 20:24:41 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:23.840 20:24:41 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:07:23.840 20:24:41 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:23.840 20:24:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.840 20:24:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.840 20:24:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.840 20:24:41 -- accel/accel.sh@42 -- # jq -r . 00:07:23.840 [2024-04-26 20:24:41.557233] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
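Annotation: the two results tables above are internally consistent once the vector count is taken into account. The per-core row appears to report transfers/s times transfer size times vector count, while the Total row drops the vector-count factor — an observation from the numbers themselves, not a claim about accel_perf internals. Integer arithmetic reproducing the reported MiB/s:

  echo $(( 246782 * 4096 * 2 / 1048576 ))   # crc32c -C 2, per-core row -> 1927 MiB/s
  echo $(( 246782 * 4096     / 1048576 ))   # crc32c -C 2, Total row    -> 963 MiB/s
  echo $(( 229056 * 4096     / 1048576 ))   # copy, 1 vector, both rows -> 894 MiB/s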
00:07:23.840 [2024-04-26 20:24:41.557361] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3349999 ] 00:07:23.840 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.840 [2024-04-26 20:24:41.674448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.840 [2024-04-26 20:24:41.769823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.840 [2024-04-26 20:24:41.774347] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:23.840 [2024-04-26 20:24:41.782331] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:30.571 20:24:48 -- accel/accel.sh@21 -- # val= 00:07:30.571 20:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.571 20:24:48 -- accel/accel.sh@21 -- # val= 00:07:30.571 20:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.571 20:24:48 -- accel/accel.sh@21 -- # val=0x1 00:07:30.571 20:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.571 20:24:48 -- accel/accel.sh@21 -- # val= 00:07:30.571 20:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.571 20:24:48 -- accel/accel.sh@21 -- # val= 00:07:30.571 20:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.571 20:24:48 -- accel/accel.sh@21 -- # val=copy 00:07:30.571 20:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.571 20:24:48 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.571 20:24:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:30.571 20:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.571 20:24:48 -- accel/accel.sh@21 -- # val= 00:07:30.571 20:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.571 20:24:48 -- accel/accel.sh@21 -- # val=dsa 00:07:30.571 20:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.571 20:24:48 -- accel/accel.sh@23 -- # accel_module=dsa 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.571 20:24:48 -- accel/accel.sh@21 -- # val=32 00:07:30.571 20:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.571 20:24:48 -- accel/accel.sh@21 -- # val=32 00:07:30.571 20:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.571 20:24:48 -- 
accel/accel.sh@21 -- # val=1 00:07:30.571 20:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.571 20:24:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:30.571 20:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.571 20:24:48 -- accel/accel.sh@21 -- # val=Yes 00:07:30.571 20:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.571 20:24:48 -- accel/accel.sh@21 -- # val= 00:07:30.571 20:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.571 20:24:48 -- accel/accel.sh@21 -- # val= 00:07:30.571 20:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.571 20:24:48 -- accel/accel.sh@20 -- # read -r var val 00:07:33.116 20:24:51 -- accel/accel.sh@21 -- # val= 00:07:33.116 20:24:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.116 20:24:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.116 20:24:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.116 20:24:51 -- accel/accel.sh@21 -- # val= 00:07:33.116 20:24:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.116 20:24:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.116 20:24:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.116 20:24:51 -- accel/accel.sh@21 -- # val= 00:07:33.116 20:24:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.116 20:24:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.116 20:24:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.116 20:24:51 -- accel/accel.sh@21 -- # val= 00:07:33.116 20:24:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.116 20:24:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.116 20:24:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.116 20:24:51 -- accel/accel.sh@21 -- # val= 00:07:33.116 20:24:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.116 20:24:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.116 20:24:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.116 20:24:51 -- accel/accel.sh@21 -- # val= 00:07:33.116 20:24:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.116 20:24:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.116 20:24:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.116 20:24:51 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:07:33.116 20:24:51 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:33.116 20:24:51 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:07:33.116 00:07:33.116 real 0m19.375s 00:07:33.116 user 0m6.528s 00:07:33.116 sys 0m0.481s 00:07:33.116 20:24:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.116 20:24:51 -- common/autotest_common.sh@10 -- # set +x 00:07:33.116 ************************************ 00:07:33.116 END TEST accel_copy 00:07:33.116 ************************************ 00:07:33.116 20:24:51 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:33.116 20:24:51 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:33.116 20:24:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:33.116 20:24:51 -- common/autotest_common.sh@10 -- # set +x 00:07:33.116 ************************************ 00:07:33.116 START TEST accel_fill 
00:07:33.116 ************************************ 00:07:33.116 20:24:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:33.116 20:24:51 -- accel/accel.sh@16 -- # local accel_opc 00:07:33.116 20:24:51 -- accel/accel.sh@17 -- # local accel_module 00:07:33.116 20:24:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:33.116 20:24:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:33.116 20:24:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.116 20:24:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.116 20:24:51 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:33.116 20:24:51 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:33.116 20:24:51 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:07:33.116 20:24:51 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:33.116 20:24:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.116 20:24:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.116 20:24:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.116 20:24:51 -- accel/accel.sh@42 -- # jq -r . 00:07:33.116 [2024-04-26 20:24:51.293226] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:33.116 [2024-04-26 20:24:51.293350] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3352125 ] 00:07:33.116 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.116 [2024-04-26 20:24:51.409045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.377 [2024-04-26 20:24:51.504613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.377 [2024-04-26 20:24:51.509155] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:33.377 [2024-04-26 20:24:51.517134] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:43.379 20:25:00 -- accel/accel.sh@18 -- # out=' 00:07:43.379 SPDK Configuration: 00:07:43.379 Core mask: 0x1 00:07:43.379 00:07:43.379 Accel Perf Configuration: 00:07:43.379 Workload Type: fill 00:07:43.379 Fill pattern: 0x80 00:07:43.379 Transfer size: 4096 bytes 00:07:43.379 Vector count 1 00:07:43.379 Module: dsa 00:07:43.379 Queue depth: 64 00:07:43.379 Allocate depth: 64 00:07:43.379 # threads/core: 1 00:07:43.379 Run time: 1 seconds 00:07:43.379 Verify: Yes 00:07:43.379 00:07:43.379 Running for 1 seconds... 
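Annotation: for this test the extra accel_perf flags select the fill parameters echoed in the configuration block above; correlating the command line with the echo gives the following mapping:

  # -t 1    : Run time: 1 seconds
  # -w fill : Workload Type: fill
  # -f 128  : Fill pattern: 0x80 (128 decimal)
  # -q 64   : Queue depth: 64
  # -a 64   : Allocate depth: 64
  # -y      : Verify: Yes
  accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y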
00:07:43.379 00:07:43.379 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:43.379 ------------------------------------------------------------------------------------ 00:07:43.379 0,0 329065/s 1285 MiB/s 0 0 00:07:43.379 ==================================================================================== 00:07:43.379 Total 329065/s 1285 MiB/s 0 0' 00:07:43.379 20:25:00 -- accel/accel.sh@20 -- # IFS=: 00:07:43.379 20:25:00 -- accel/accel.sh@20 -- # read -r var val 00:07:43.379 20:25:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:43.379 20:25:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:43.379 20:25:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.379 20:25:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.379 20:25:00 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:43.379 20:25:00 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:43.379 20:25:00 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:07:43.379 20:25:00 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:43.379 20:25:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.379 20:25:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.379 20:25:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.379 20:25:00 -- accel/accel.sh@42 -- # jq -r . 00:07:43.379 [2024-04-26 20:25:00.976018] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:43.379 [2024-04-26 20:25:00.976142] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3353945 ] 00:07:43.379 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.379 [2024-04-26 20:25:01.073128] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.379 [2024-04-26 20:25:01.166963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.379 [2024-04-26 20:25:01.171474] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:43.379 [2024-04-26 20:25:01.179456] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:49.955 20:25:07 -- accel/accel.sh@21 -- # val= 00:07:49.955 20:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.955 20:25:07 -- accel/accel.sh@20 -- # IFS=: 00:07:49.955 20:25:07 -- accel/accel.sh@20 -- # read -r var val 00:07:49.955 20:25:07 -- accel/accel.sh@21 -- # val= 00:07:49.955 20:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.955 20:25:07 -- accel/accel.sh@20 -- # IFS=: 00:07:49.955 20:25:07 -- accel/accel.sh@20 -- # read -r var val 00:07:49.955 20:25:07 -- accel/accel.sh@21 -- # val=0x1 00:07:49.955 20:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.955 20:25:07 -- accel/accel.sh@20 -- # IFS=: 00:07:49.955 20:25:07 -- accel/accel.sh@20 -- # read -r var val 00:07:49.955 20:25:07 -- accel/accel.sh@21 -- # val= 00:07:49.955 20:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.955 20:25:07 -- accel/accel.sh@20 -- # IFS=: 00:07:49.955 20:25:07 -- accel/accel.sh@20 -- # read -r var val 00:07:49.955 20:25:07 -- accel/accel.sh@21 -- # val= 00:07:49.955 20:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.955 20:25:07 -- accel/accel.sh@20 -- # IFS=: 00:07:49.955 20:25:07 -- accel/accel.sh@20 -- # read -r var val 00:07:49.955 20:25:07 -- 
accel/accel.sh@21 -- # val=fill 00:07:49.955 20:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.955 20:25:07 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:49.955 20:25:07 -- accel/accel.sh@20 -- # IFS=: 00:07:49.955 20:25:07 -- accel/accel.sh@20 -- # read -r var val 00:07:49.955 20:25:07 -- accel/accel.sh@21 -- # val=0x80 00:07:49.955 20:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.955 20:25:07 -- accel/accel.sh@20 -- # IFS=: 00:07:49.955 20:25:07 -- accel/accel.sh@20 -- # read -r var val 00:07:49.955 20:25:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:49.955 20:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.955 20:25:07 -- accel/accel.sh@20 -- # IFS=: 00:07:49.955 20:25:07 -- accel/accel.sh@20 -- # read -r var val 00:07:49.955 20:25:07 -- accel/accel.sh@21 -- # val= 00:07:49.956 20:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.956 20:25:07 -- accel/accel.sh@20 -- # IFS=: 00:07:49.956 20:25:07 -- accel/accel.sh@20 -- # read -r var val 00:07:49.956 20:25:07 -- accel/accel.sh@21 -- # val=dsa 00:07:49.956 20:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.956 20:25:07 -- accel/accel.sh@23 -- # accel_module=dsa 00:07:49.956 20:25:07 -- accel/accel.sh@20 -- # IFS=: 00:07:49.956 20:25:07 -- accel/accel.sh@20 -- # read -r var val 00:07:49.956 20:25:07 -- accel/accel.sh@21 -- # val=64 00:07:49.956 20:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.956 20:25:07 -- accel/accel.sh@20 -- # IFS=: 00:07:49.956 20:25:07 -- accel/accel.sh@20 -- # read -r var val 00:07:49.956 20:25:07 -- accel/accel.sh@21 -- # val=64 00:07:49.956 20:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.956 20:25:07 -- accel/accel.sh@20 -- # IFS=: 00:07:49.956 20:25:07 -- accel/accel.sh@20 -- # read -r var val 00:07:49.956 20:25:07 -- accel/accel.sh@21 -- # val=1 00:07:49.956 20:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.956 20:25:07 -- accel/accel.sh@20 -- # IFS=: 00:07:49.956 20:25:07 -- accel/accel.sh@20 -- # read -r var val 00:07:49.956 20:25:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:49.956 20:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.956 20:25:07 -- accel/accel.sh@20 -- # IFS=: 00:07:49.956 20:25:07 -- accel/accel.sh@20 -- # read -r var val 00:07:49.956 20:25:07 -- accel/accel.sh@21 -- # val=Yes 00:07:49.956 20:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.956 20:25:07 -- accel/accel.sh@20 -- # IFS=: 00:07:49.956 20:25:07 -- accel/accel.sh@20 -- # read -r var val 00:07:49.956 20:25:07 -- accel/accel.sh@21 -- # val= 00:07:49.956 20:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.956 20:25:07 -- accel/accel.sh@20 -- # IFS=: 00:07:49.956 20:25:07 -- accel/accel.sh@20 -- # read -r var val 00:07:49.956 20:25:07 -- accel/accel.sh@21 -- # val= 00:07:49.956 20:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.956 20:25:07 -- accel/accel.sh@20 -- # IFS=: 00:07:49.956 20:25:07 -- accel/accel.sh@20 -- # read -r var val 00:07:52.494 20:25:10 -- accel/accel.sh@21 -- # val= 00:07:52.494 20:25:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.494 20:25:10 -- accel/accel.sh@20 -- # IFS=: 00:07:52.494 20:25:10 -- accel/accel.sh@20 -- # read -r var val 00:07:52.494 20:25:10 -- accel/accel.sh@21 -- # val= 00:07:52.494 20:25:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.494 20:25:10 -- accel/accel.sh@20 -- # IFS=: 00:07:52.494 20:25:10 -- accel/accel.sh@20 -- # read -r var val 00:07:52.494 20:25:10 -- accel/accel.sh@21 -- # val= 00:07:52.494 20:25:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.494 
20:25:10 -- accel/accel.sh@20 -- # IFS=: 00:07:52.494 20:25:10 -- accel/accel.sh@20 -- # read -r var val 00:07:52.494 20:25:10 -- accel/accel.sh@21 -- # val= 00:07:52.494 20:25:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.494 20:25:10 -- accel/accel.sh@20 -- # IFS=: 00:07:52.494 20:25:10 -- accel/accel.sh@20 -- # read -r var val 00:07:52.494 20:25:10 -- accel/accel.sh@21 -- # val= 00:07:52.494 20:25:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.494 20:25:10 -- accel/accel.sh@20 -- # IFS=: 00:07:52.494 20:25:10 -- accel/accel.sh@20 -- # read -r var val 00:07:52.494 20:25:10 -- accel/accel.sh@21 -- # val= 00:07:52.494 20:25:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.494 20:25:10 -- accel/accel.sh@20 -- # IFS=: 00:07:52.494 20:25:10 -- accel/accel.sh@20 -- # read -r var val 00:07:52.494 20:25:10 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:07:52.494 20:25:10 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:52.494 20:25:10 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:07:52.494 00:07:52.494 real 0m19.351s 00:07:52.495 user 0m6.538s 00:07:52.495 sys 0m0.458s 00:07:52.495 20:25:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.495 20:25:10 -- common/autotest_common.sh@10 -- # set +x 00:07:52.495 ************************************ 00:07:52.495 END TEST accel_fill 00:07:52.495 ************************************ 00:07:52.495 20:25:10 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:52.495 20:25:10 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:52.495 20:25:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:52.495 20:25:10 -- common/autotest_common.sh@10 -- # set +x 00:07:52.495 ************************************ 00:07:52.495 START TEST accel_copy_crc32c 00:07:52.495 ************************************ 00:07:52.495 20:25:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:07:52.495 20:25:10 -- accel/accel.sh@16 -- # local accel_opc 00:07:52.495 20:25:10 -- accel/accel.sh@17 -- # local accel_module 00:07:52.495 20:25:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:52.495 20:25:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:52.495 20:25:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:52.495 20:25:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:52.495 20:25:10 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:52.495 20:25:10 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:52.495 20:25:10 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:07:52.495 20:25:10 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:52.495 20:25:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:52.495 20:25:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:52.495 20:25:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:52.495 20:25:10 -- accel/accel.sh@42 -- # jq -r . 00:07:52.495 [2024-04-26 20:25:10.681020] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
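Annotation: each test above is bracketed by identical START/END banners, a bash `time` summary (the real/user/sys triple), and xtrace toggling from autotest_common.sh. A hedged reconstruction of that run_test wrapper — the actual helper lives in autotest_common.sh and is not shown in this trace, so the details are inferred:

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"    # produces the real/user/sys lines seen after each test
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }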
00:07:52.495 [2024-04-26 20:25:10.681148] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3355890 ] 00:07:52.495 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.495 [2024-04-26 20:25:10.796264] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.754 [2024-04-26 20:25:10.895116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.754 [2024-04-26 20:25:10.899718] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:52.754 [2024-04-26 20:25:10.907709] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:02.775 20:25:20 -- accel/accel.sh@18 -- # out=' 00:08:02.775 SPDK Configuration: 00:08:02.775 Core mask: 0x1 00:08:02.775 00:08:02.775 Accel Perf Configuration: 00:08:02.775 Workload Type: copy_crc32c 00:08:02.775 CRC-32C seed: 0 00:08:02.775 Vector size: 4096 bytes 00:08:02.775 Transfer size: 4096 bytes 00:08:02.775 Vector count 1 00:08:02.775 Module: dsa 00:08:02.775 Queue depth: 32 00:08:02.775 Allocate depth: 32 00:08:02.775 # threads/core: 1 00:08:02.775 Run time: 1 seconds 00:08:02.775 Verify: Yes 00:08:02.775 00:08:02.775 Running for 1 seconds... 00:08:02.775 00:08:02.775 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:02.775 ------------------------------------------------------------------------------------ 00:08:02.775 0,0 207296/s 809 MiB/s 0 0 00:08:02.775 ==================================================================================== 00:08:02.775 Total 207296/s 809 MiB/s 0 0' 00:08:02.775 20:25:20 -- accel/accel.sh@20 -- # IFS=: 00:08:02.775 20:25:20 -- accel/accel.sh@20 -- # read -r var val 00:08:02.775 20:25:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:02.775 20:25:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:02.775 20:25:20 -- accel/accel.sh@12 -- # build_accel_config 00:08:02.775 20:25:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:02.775 20:25:20 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:02.775 20:25:20 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:02.775 20:25:20 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:08:02.775 20:25:20 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:02.775 20:25:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:02.775 20:25:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:02.776 20:25:20 -- accel/accel.sh@41 -- # local IFS=, 00:08:02.776 20:25:20 -- accel/accel.sh@42 -- # jq -r . 00:08:02.776 [2024-04-26 20:25:20.378948] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:08:02.776 [2024-04-26 20:25:20.379076] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3357889 ] 00:08:02.776 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.776 [2024-04-26 20:25:20.485997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.776 [2024-04-26 20:25:20.581491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.776 [2024-04-26 20:25:20.586040] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:02.776 [2024-04-26 20:25:20.594019] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:09.368 20:25:26 -- accel/accel.sh@21 -- # val= 00:08:09.368 20:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # IFS=: 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # read -r var val 00:08:09.368 20:25:26 -- accel/accel.sh@21 -- # val= 00:08:09.368 20:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # IFS=: 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # read -r var val 00:08:09.368 20:25:26 -- accel/accel.sh@21 -- # val=0x1 00:08:09.368 20:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # IFS=: 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # read -r var val 00:08:09.368 20:25:26 -- accel/accel.sh@21 -- # val= 00:08:09.368 20:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # IFS=: 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # read -r var val 00:08:09.368 20:25:26 -- accel/accel.sh@21 -- # val= 00:08:09.368 20:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # IFS=: 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # read -r var val 00:08:09.368 20:25:26 -- accel/accel.sh@21 -- # val=copy_crc32c 00:08:09.368 20:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.368 20:25:26 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # IFS=: 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # read -r var val 00:08:09.368 20:25:26 -- accel/accel.sh@21 -- # val=0 00:08:09.368 20:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # IFS=: 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # read -r var val 00:08:09.368 20:25:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:09.368 20:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # IFS=: 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # read -r var val 00:08:09.368 20:25:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:09.368 20:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # IFS=: 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # read -r var val 00:08:09.368 20:25:26 -- accel/accel.sh@21 -- # val= 00:08:09.368 20:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # IFS=: 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # read -r var val 00:08:09.368 20:25:26 -- accel/accel.sh@21 -- # val=dsa 00:08:09.368 20:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.368 20:25:26 -- accel/accel.sh@23 -- # accel_module=dsa 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # IFS=: 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # read -r var val 
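Annotation: the DPDK EAL parameter line repeated at every startup (including the one above) decodes as follows; these are standard DPDK/SPDK EAL options:

  # -c 0x1                          core mask: run on one core (matches "Core mask: 0x1")
  # --no-shconf                     do not create shared EAL configuration files
  # --huge-unlink                   unlink hugepage files immediately after mapping
  # --no-telemetry                  disable the DPDK telemetry socket
  # --base-virtaddr=0x200000000000  map memory at a fixed base virtual address
  # --match-allocations             free hugepage memory back in the chunks it was allocated in
  # --file-prefix=spdk_pid<pid>     per-process hugepage file prefix (the spdk pid)
  # --log-level=lib.eal:6           per-component log verbosity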
00:08:09.368 20:25:26 -- accel/accel.sh@21 -- # val=32 00:08:09.368 20:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # IFS=: 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # read -r var val 00:08:09.368 20:25:26 -- accel/accel.sh@21 -- # val=32 00:08:09.368 20:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # IFS=: 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # read -r var val 00:08:09.368 20:25:26 -- accel/accel.sh@21 -- # val=1 00:08:09.368 20:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # IFS=: 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # read -r var val 00:08:09.368 20:25:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:09.368 20:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # IFS=: 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # read -r var val 00:08:09.368 20:25:26 -- accel/accel.sh@21 -- # val=Yes 00:08:09.368 20:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # IFS=: 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # read -r var val 00:08:09.368 20:25:26 -- accel/accel.sh@21 -- # val= 00:08:09.368 20:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # IFS=: 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # read -r var val 00:08:09.368 20:25:26 -- accel/accel.sh@21 -- # val= 00:08:09.368 20:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # IFS=: 00:08:09.368 20:25:26 -- accel/accel.sh@20 -- # read -r var val 00:08:11.915 20:25:29 -- accel/accel.sh@21 -- # val= 00:08:11.915 20:25:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.915 20:25:29 -- accel/accel.sh@20 -- # IFS=: 00:08:11.915 20:25:29 -- accel/accel.sh@20 -- # read -r var val 00:08:11.915 20:25:29 -- accel/accel.sh@21 -- # val= 00:08:11.915 20:25:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.915 20:25:29 -- accel/accel.sh@20 -- # IFS=: 00:08:11.915 20:25:29 -- accel/accel.sh@20 -- # read -r var val 00:08:11.915 20:25:29 -- accel/accel.sh@21 -- # val= 00:08:11.915 20:25:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.915 20:25:29 -- accel/accel.sh@20 -- # IFS=: 00:08:11.915 20:25:29 -- accel/accel.sh@20 -- # read -r var val 00:08:11.915 20:25:29 -- accel/accel.sh@21 -- # val= 00:08:11.915 20:25:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.915 20:25:29 -- accel/accel.sh@20 -- # IFS=: 00:08:11.915 20:25:29 -- accel/accel.sh@20 -- # read -r var val 00:08:11.915 20:25:29 -- accel/accel.sh@21 -- # val= 00:08:11.915 20:25:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.915 20:25:29 -- accel/accel.sh@20 -- # IFS=: 00:08:11.915 20:25:29 -- accel/accel.sh@20 -- # read -r var val 00:08:11.915 20:25:29 -- accel/accel.sh@21 -- # val= 00:08:11.915 20:25:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.915 20:25:29 -- accel/accel.sh@20 -- # IFS=: 00:08:11.915 20:25:29 -- accel/accel.sh@20 -- # read -r var val 00:08:11.915 20:25:30 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:08:11.915 20:25:30 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:08:11.915 20:25:30 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:08:11.915 00:08:11.915 real 0m19.366s 00:08:11.915 user 0m6.546s 00:08:11.915 sys 0m0.462s 00:08:11.915 20:25:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.915 20:25:30 -- common/autotest_common.sh@10 -- # set +x 00:08:11.915 ************************************ 
00:08:11.915 END TEST accel_copy_crc32c 00:08:11.915 ************************************ 00:08:11.915 20:25:30 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:11.915 20:25:30 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:11.915 20:25:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:11.915 20:25:30 -- common/autotest_common.sh@10 -- # set +x 00:08:11.915 ************************************ 00:08:11.915 START TEST accel_copy_crc32c_C2 00:08:11.915 ************************************ 00:08:11.915 20:25:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:11.915 20:25:30 -- accel/accel.sh@16 -- # local accel_opc 00:08:11.915 20:25:30 -- accel/accel.sh@17 -- # local accel_module 00:08:11.915 20:25:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:11.915 20:25:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:11.915 20:25:30 -- accel/accel.sh@12 -- # build_accel_config 00:08:11.915 20:25:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:11.915 20:25:30 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:11.915 20:25:30 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:11.915 20:25:30 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:08:11.915 20:25:30 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:11.915 20:25:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:11.915 20:25:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:11.915 20:25:30 -- accel/accel.sh@41 -- # local IFS=, 00:08:11.915 20:25:30 -- accel/accel.sh@42 -- # jq -r . 00:08:11.915 [2024-04-26 20:25:30.091483] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:11.915 [2024-04-26 20:25:30.091623] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3359728 ] 00:08:11.915 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.915 [2024-04-26 20:25:30.223115] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.176 [2024-04-26 20:25:30.323533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.176 [2024-04-26 20:25:30.328170] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:12.176 [2024-04-26 20:25:30.336140] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:22.233 20:25:39 -- accel/accel.sh@18 -- # out=' 00:08:22.233 SPDK Configuration: 00:08:22.233 Core mask: 0x1 00:08:22.233 00:08:22.233 Accel Perf Configuration: 00:08:22.233 Workload Type: copy_crc32c 00:08:22.233 CRC-32C seed: 0 00:08:22.233 Vector size: 4096 bytes 00:08:22.233 Transfer size: 8192 bytes 00:08:22.233 Vector count 2 00:08:22.233 Module: dsa 00:08:22.233 Queue depth: 32 00:08:22.233 Allocate depth: 32 00:08:22.233 # threads/core: 1 00:08:22.234 Run time: 1 seconds 00:08:22.234 Verify: Yes 00:08:22.234 00:08:22.234 Running for 1 seconds... 
00:08:22.234 00:08:22.234 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:22.234 ------------------------------------------------------------------------------------ 00:08:22.234 0,0 139970/s 1093 MiB/s 0 0 00:08:22.234 ==================================================================================== 00:08:22.234 Total 139970/s 546 MiB/s 0 0' 00:08:22.234 20:25:39 -- accel/accel.sh@20 -- # IFS=: 00:08:22.234 20:25:39 -- accel/accel.sh@20 -- # read -r var val 00:08:22.234 20:25:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:22.234 20:25:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:22.234 20:25:39 -- accel/accel.sh@12 -- # build_accel_config 00:08:22.234 20:25:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:22.234 20:25:39 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:22.234 20:25:39 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:22.234 20:25:39 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:08:22.234 20:25:39 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:22.234 20:25:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:22.234 20:25:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:22.234 20:25:39 -- accel/accel.sh@41 -- # local IFS=, 00:08:22.234 20:25:39 -- accel/accel.sh@42 -- # jq -r . 00:08:22.234 [2024-04-26 20:25:39.872845] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:22.234 [2024-04-26 20:25:39.872975] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3361776 ] 00:08:22.234 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.234 [2024-04-26 20:25:39.989649] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.234 [2024-04-26 20:25:40.106128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.234 [2024-04-26 20:25:40.110721] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:22.234 [2024-04-26 20:25:40.118700] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:28.820 20:25:46 -- accel/accel.sh@21 -- # val= 00:08:28.820 20:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # IFS=: 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # read -r var val 00:08:28.820 20:25:46 -- accel/accel.sh@21 -- # val= 00:08:28.820 20:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # IFS=: 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # read -r var val 00:08:28.820 20:25:46 -- accel/accel.sh@21 -- # val=0x1 00:08:28.820 20:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # IFS=: 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # read -r var val 00:08:28.820 20:25:46 -- accel/accel.sh@21 -- # val= 00:08:28.820 20:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # IFS=: 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # read -r var val 00:08:28.820 20:25:46 -- accel/accel.sh@21 -- # val= 00:08:28.820 20:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # IFS=: 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # read -r var val 00:08:28.820 20:25:46 -- accel/accel.sh@21 
-- # val=copy_crc32c 00:08:28.820 20:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.820 20:25:46 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # IFS=: 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # read -r var val 00:08:28.820 20:25:46 -- accel/accel.sh@21 -- # val=0 00:08:28.820 20:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # IFS=: 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # read -r var val 00:08:28.820 20:25:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:28.820 20:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # IFS=: 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # read -r var val 00:08:28.820 20:25:46 -- accel/accel.sh@21 -- # val='8192 bytes' 00:08:28.820 20:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # IFS=: 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # read -r var val 00:08:28.820 20:25:46 -- accel/accel.sh@21 -- # val= 00:08:28.820 20:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # IFS=: 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # read -r var val 00:08:28.820 20:25:46 -- accel/accel.sh@21 -- # val=dsa 00:08:28.820 20:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.820 20:25:46 -- accel/accel.sh@23 -- # accel_module=dsa 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # IFS=: 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # read -r var val 00:08:28.820 20:25:46 -- accel/accel.sh@21 -- # val=32 00:08:28.820 20:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # IFS=: 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # read -r var val 00:08:28.820 20:25:46 -- accel/accel.sh@21 -- # val=32 00:08:28.820 20:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # IFS=: 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # read -r var val 00:08:28.820 20:25:46 -- accel/accel.sh@21 -- # val=1 00:08:28.820 20:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # IFS=: 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # read -r var val 00:08:28.820 20:25:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:28.820 20:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # IFS=: 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # read -r var val 00:08:28.820 20:25:46 -- accel/accel.sh@21 -- # val=Yes 00:08:28.820 20:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # IFS=: 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # read -r var val 00:08:28.820 20:25:46 -- accel/accel.sh@21 -- # val= 00:08:28.820 20:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # IFS=: 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # read -r var val 00:08:28.820 20:25:46 -- accel/accel.sh@21 -- # val= 00:08:28.820 20:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # IFS=: 00:08:28.820 20:25:46 -- accel/accel.sh@20 -- # read -r var val 00:08:31.361 20:25:49 -- accel/accel.sh@21 -- # val= 00:08:31.361 20:25:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.361 20:25:49 -- accel/accel.sh@20 -- # IFS=: 00:08:31.361 20:25:49 -- accel/accel.sh@20 -- # read -r var val 00:08:31.361 20:25:49 -- accel/accel.sh@21 -- # val= 00:08:31.361 20:25:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.361 
20:25:49 -- accel/accel.sh@20 -- # IFS=: 00:08:31.361 20:25:49 -- accel/accel.sh@20 -- # read -r var val 00:08:31.361 20:25:49 -- accel/accel.sh@21 -- # val= 00:08:31.361 20:25:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.361 20:25:49 -- accel/accel.sh@20 -- # IFS=: 00:08:31.361 20:25:49 -- accel/accel.sh@20 -- # read -r var val 00:08:31.361 20:25:49 -- accel/accel.sh@21 -- # val= 00:08:31.361 20:25:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.361 20:25:49 -- accel/accel.sh@20 -- # IFS=: 00:08:31.361 20:25:49 -- accel/accel.sh@20 -- # read -r var val 00:08:31.361 20:25:49 -- accel/accel.sh@21 -- # val= 00:08:31.361 20:25:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.361 20:25:49 -- accel/accel.sh@20 -- # IFS=: 00:08:31.361 20:25:49 -- accel/accel.sh@20 -- # read -r var val 00:08:31.361 20:25:49 -- accel/accel.sh@21 -- # val= 00:08:31.361 20:25:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.361 20:25:49 -- accel/accel.sh@20 -- # IFS=: 00:08:31.361 20:25:49 -- accel/accel.sh@20 -- # read -r var val 00:08:31.361 20:25:49 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:08:31.361 20:25:49 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:08:31.361 20:25:49 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:08:31.361 00:08:31.361 real 0m19.471s 00:08:31.361 user 0m6.582s 00:08:31.361 sys 0m0.527s 00:08:31.361 20:25:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.361 20:25:49 -- common/autotest_common.sh@10 -- # set +x 00:08:31.361 ************************************ 00:08:31.361 END TEST accel_copy_crc32c_C2 00:08:31.361 ************************************ 00:08:31.361 20:25:49 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:31.361 20:25:49 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:31.361 20:25:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:31.361 20:25:49 -- common/autotest_common.sh@10 -- # set +x 00:08:31.361 ************************************ 00:08:31.361 START TEST accel_dualcast 00:08:31.361 ************************************ 00:08:31.361 20:25:49 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:08:31.361 20:25:49 -- accel/accel.sh@16 -- # local accel_opc 00:08:31.361 20:25:49 -- accel/accel.sh@17 -- # local accel_module 00:08:31.361 20:25:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:08:31.361 20:25:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:31.361 20:25:49 -- accel/accel.sh@12 -- # build_accel_config 00:08:31.361 20:25:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:31.361 20:25:49 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:31.361 20:25:49 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:31.361 20:25:49 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:08:31.361 20:25:49 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:31.361 20:25:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:31.361 20:25:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:31.361 20:25:49 -- accel/accel.sh@41 -- # local IFS=, 00:08:31.361 20:25:49 -- accel/accel.sh@42 -- # jq -r . 00:08:31.361 [2024-04-26 20:25:49.589169] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
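Annotation: the three [[ ... ]] checks that close each test (accel.sh line 28 in this trace) assert that a module was negotiated, that an opcode was recorded, and that the module is the expected one; the `\d\s\a` escaping is simply how bash xtrace prints the right-hand side of == inside [[ ]]. A hedged paraphrase in terms of the script's variables — the names are assumptions, since only their expanded values appear in the trace:

  [[ -n $accel_module ]]          # a module was selected, e.g. dsa
  [[ -n $accel_opc ]]             # the workload opcode was recorded, e.g. copy_crc32c
  [[ $accel_module == "dsa" ]]    # the DSA hardware path served it, not a software fallback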
00:08:31.361 [2024-04-26 20:25:49.589289] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3363674 ] 00:08:31.361 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.361 [2024-04-26 20:25:49.700240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.620 [2024-04-26 20:25:49.794805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.620 [2024-04-26 20:25:49.799294] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:31.620 [2024-04-26 20:25:49.807280] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:41.606 20:25:59 -- accel/accel.sh@18 -- # out=' 00:08:41.606 SPDK Configuration: 00:08:41.606 Core mask: 0x1 00:08:41.606 00:08:41.606 Accel Perf Configuration: 00:08:41.606 Workload Type: dualcast 00:08:41.606 Transfer size: 4096 bytes 00:08:41.606 Vector count 1 00:08:41.606 Module: dsa 00:08:41.606 Queue depth: 32 00:08:41.606 Allocate depth: 32 00:08:41.606 # threads/core: 1 00:08:41.606 Run time: 1 seconds 00:08:41.606 Verify: Yes 00:08:41.606 00:08:41.606 Running for 1 seconds... 00:08:41.606 00:08:41.606 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:41.606 ------------------------------------------------------------------------------------ 00:08:41.606 0,0 219712/s 858 MiB/s 0 0 00:08:41.606 ==================================================================================== 00:08:41.606 Total 219712/s 858 MiB/s 0 0' 00:08:41.606 20:25:59 -- accel/accel.sh@20 -- # IFS=: 00:08:41.606 20:25:59 -- accel/accel.sh@20 -- # read -r var val 00:08:41.606 20:25:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:41.606 20:25:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:41.606 20:25:59 -- accel/accel.sh@12 -- # build_accel_config 00:08:41.606 20:25:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:41.606 20:25:59 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:41.606 20:25:59 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:41.606 20:25:59 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:08:41.606 20:25:59 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:41.606 20:25:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:41.606 20:25:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:41.606 20:25:59 -- accel/accel.sh@41 -- # local IFS=, 00:08:41.606 20:25:59 -- accel/accel.sh@42 -- # jq -r . 00:08:41.606 [2024-04-26 20:25:59.261040] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
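Annotation: the "Enabled DSA user-mode" / "Enabled IAA user-mode" notices in the startup above come from the two scan RPCs carried in the generated config. The same modules can be enabled against a running SPDK application with rpc.py from the spdk checkout, using the method names visible in accel_json_cfg — a usage sketch, not taken from this log:

  ./scripts/rpc.py dsa_scan_accel_module
  ./scripts/rpc.py iaa_scan_accel_module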
00:08:41.607 [2024-04-26 20:25:59.261155] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3365592 ] 00:08:41.607 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.607 [2024-04-26 20:25:59.369973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.607 [2024-04-26 20:25:59.465516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.607 [2024-04-26 20:25:59.470030] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:41.607 [2024-04-26 20:25:59.478014] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:48.186 20:26:05 -- accel/accel.sh@21 -- # val= 00:08:48.186 20:26:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # IFS=: 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # read -r var val 00:08:48.186 20:26:05 -- accel/accel.sh@21 -- # val= 00:08:48.186 20:26:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # IFS=: 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # read -r var val 00:08:48.186 20:26:05 -- accel/accel.sh@21 -- # val=0x1 00:08:48.186 20:26:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # IFS=: 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # read -r var val 00:08:48.186 20:26:05 -- accel/accel.sh@21 -- # val= 00:08:48.186 20:26:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # IFS=: 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # read -r var val 00:08:48.186 20:26:05 -- accel/accel.sh@21 -- # val= 00:08:48.186 20:26:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # IFS=: 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # read -r var val 00:08:48.186 20:26:05 -- accel/accel.sh@21 -- # val=dualcast 00:08:48.186 20:26:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.186 20:26:05 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # IFS=: 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # read -r var val 00:08:48.186 20:26:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:48.186 20:26:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # IFS=: 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # read -r var val 00:08:48.186 20:26:05 -- accel/accel.sh@21 -- # val= 00:08:48.186 20:26:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # IFS=: 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # read -r var val 00:08:48.186 20:26:05 -- accel/accel.sh@21 -- # val=dsa 00:08:48.186 20:26:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.186 20:26:05 -- accel/accel.sh@23 -- # accel_module=dsa 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # IFS=: 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # read -r var val 00:08:48.186 20:26:05 -- accel/accel.sh@21 -- # val=32 00:08:48.186 20:26:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # IFS=: 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # read -r var val 00:08:48.186 20:26:05 -- accel/accel.sh@21 -- # val=32 00:08:48.186 20:26:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # IFS=: 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # read -r var val 00:08:48.186 20:26:05 -- 
accel/accel.sh@21 -- # val=1 00:08:48.186 20:26:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # IFS=: 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # read -r var val 00:08:48.186 20:26:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:48.186 20:26:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # IFS=: 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # read -r var val 00:08:48.186 20:26:05 -- accel/accel.sh@21 -- # val=Yes 00:08:48.186 20:26:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # IFS=: 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # read -r var val 00:08:48.186 20:26:05 -- accel/accel.sh@21 -- # val= 00:08:48.186 20:26:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # IFS=: 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # read -r var val 00:08:48.186 20:26:05 -- accel/accel.sh@21 -- # val= 00:08:48.186 20:26:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # IFS=: 00:08:48.186 20:26:05 -- accel/accel.sh@20 -- # read -r var val 00:08:50.730 20:26:08 -- accel/accel.sh@21 -- # val= 00:08:50.730 20:26:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.730 20:26:08 -- accel/accel.sh@20 -- # IFS=: 00:08:50.730 20:26:08 -- accel/accel.sh@20 -- # read -r var val 00:08:50.730 20:26:08 -- accel/accel.sh@21 -- # val= 00:08:50.730 20:26:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.730 20:26:08 -- accel/accel.sh@20 -- # IFS=: 00:08:50.730 20:26:08 -- accel/accel.sh@20 -- # read -r var val 00:08:50.730 20:26:08 -- accel/accel.sh@21 -- # val= 00:08:50.730 20:26:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.730 20:26:08 -- accel/accel.sh@20 -- # IFS=: 00:08:50.730 20:26:08 -- accel/accel.sh@20 -- # read -r var val 00:08:50.730 20:26:08 -- accel/accel.sh@21 -- # val= 00:08:50.730 20:26:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.730 20:26:08 -- accel/accel.sh@20 -- # IFS=: 00:08:50.730 20:26:08 -- accel/accel.sh@20 -- # read -r var val 00:08:50.730 20:26:08 -- accel/accel.sh@21 -- # val= 00:08:50.730 20:26:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.730 20:26:08 -- accel/accel.sh@20 -- # IFS=: 00:08:50.730 20:26:08 -- accel/accel.sh@20 -- # read -r var val 00:08:50.730 20:26:08 -- accel/accel.sh@21 -- # val= 00:08:50.730 20:26:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.730 20:26:08 -- accel/accel.sh@20 -- # IFS=: 00:08:50.730 20:26:08 -- accel/accel.sh@20 -- # read -r var val 00:08:50.730 20:26:08 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:08:50.730 20:26:08 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:08:50.730 20:26:08 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:08:50.730 00:08:50.730 real 0m19.326s 00:08:50.730 user 0m6.514s 00:08:50.730 sys 0m0.454s 00:08:50.730 20:26:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.730 20:26:08 -- common/autotest_common.sh@10 -- # set +x 00:08:50.730 ************************************ 00:08:50.730 END TEST accel_dualcast 00:08:50.730 ************************************ 00:08:50.730 20:26:08 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:50.730 20:26:08 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:50.730 20:26:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:50.730 20:26:08 -- common/autotest_common.sh@10 -- # set +x 00:08:50.730 ************************************ 00:08:50.730 START TEST accel_compare 00:08:50.730 
************************************ 00:08:50.730 20:26:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:08:50.730 20:26:08 -- accel/accel.sh@16 -- # local accel_opc 00:08:50.730 20:26:08 -- accel/accel.sh@17 -- # local accel_module 00:08:50.730 20:26:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:08:50.730 20:26:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:50.730 20:26:08 -- accel/accel.sh@12 -- # build_accel_config 00:08:50.730 20:26:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:50.730 20:26:08 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:50.730 20:26:08 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:50.730 20:26:08 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:08:50.730 20:26:08 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:50.730 20:26:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:50.730 20:26:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:50.730 20:26:08 -- accel/accel.sh@41 -- # local IFS=, 00:08:50.730 20:26:08 -- accel/accel.sh@42 -- # jq -r . 00:08:50.730 [2024-04-26 20:26:08.947718] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:50.730 [2024-04-26 20:26:08.947836] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3367612 ] 00:08:50.730 EAL: No free 2048 kB hugepages reported on node 1 00:08:50.730 [2024-04-26 20:26:09.051415] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.992 [2024-04-26 20:26:09.146630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.992 [2024-04-26 20:26:09.151163] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:50.992 [2024-04-26 20:26:09.159144] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:00.980 20:26:18 -- accel/accel.sh@18 -- # out=' 00:09:00.980 SPDK Configuration: 00:09:00.980 Core mask: 0x1 00:09:00.980 00:09:00.980 Accel Perf Configuration: 00:09:00.980 Workload Type: compare 00:09:00.980 Transfer size: 4096 bytes 00:09:00.980 Vector count 1 00:09:00.980 Module: dsa 00:09:00.980 Queue depth: 32 00:09:00.980 Allocate depth: 32 00:09:00.980 # threads/core: 1 00:09:00.980 Run time: 1 seconds 00:09:00.980 Verify: Yes 00:09:00.980 00:09:00.980 Running for 1 seconds... 
00:09:00.980 00:09:00.980 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:00.980 ------------------------------------------------------------------------------------ 00:09:00.980 0,0 234688/s 916 MiB/s 0 0 00:09:00.980 ==================================================================================== 00:09:00.980 Total 234688/s 916 MiB/s 0 0' 00:09:00.980 20:26:18 -- accel/accel.sh@20 -- # IFS=: 00:09:00.980 20:26:18 -- accel/accel.sh@20 -- # read -r var val 00:09:00.980 20:26:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:09:00.980 20:26:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:09:00.980 20:26:18 -- accel/accel.sh@12 -- # build_accel_config 00:09:00.980 20:26:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:00.980 20:26:18 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:00.980 20:26:18 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:00.980 20:26:18 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:09:00.980 20:26:18 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:00.980 20:26:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:00.981 20:26:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:00.981 20:26:18 -- accel/accel.sh@41 -- # local IFS=, 00:09:00.981 20:26:18 -- accel/accel.sh@42 -- # jq -r . 00:09:00.981 [2024-04-26 20:26:18.600903] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:00.981 [2024-04-26 20:26:18.601029] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3369452 ] 00:09:00.981 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.981 [2024-04-26 20:26:18.717048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.981 [2024-04-26 20:26:18.812716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.981 [2024-04-26 20:26:18.817274] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:00.981 [2024-04-26 20:26:18.825251] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:07.557 20:26:25 -- accel/accel.sh@21 -- # val= 00:09:07.557 20:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # IFS=: 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # read -r var val 00:09:07.557 20:26:25 -- accel/accel.sh@21 -- # val= 00:09:07.557 20:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # IFS=: 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # read -r var val 00:09:07.557 20:26:25 -- accel/accel.sh@21 -- # val=0x1 00:09:07.557 20:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # IFS=: 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # read -r var val 00:09:07.557 20:26:25 -- accel/accel.sh@21 -- # val= 00:09:07.557 20:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # IFS=: 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # read -r var val 00:09:07.557 20:26:25 -- accel/accel.sh@21 -- # val= 00:09:07.557 20:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # IFS=: 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # read -r var val 00:09:07.557 20:26:25 -- accel/accel.sh@21 -- # val=compare 
00:09:07.557 20:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.557 20:26:25 -- accel/accel.sh@24 -- # accel_opc=compare 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # IFS=: 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # read -r var val 00:09:07.557 20:26:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:07.557 20:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # IFS=: 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # read -r var val 00:09:07.557 20:26:25 -- accel/accel.sh@21 -- # val= 00:09:07.557 20:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # IFS=: 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # read -r var val 00:09:07.557 20:26:25 -- accel/accel.sh@21 -- # val=dsa 00:09:07.557 20:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.557 20:26:25 -- accel/accel.sh@23 -- # accel_module=dsa 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # IFS=: 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # read -r var val 00:09:07.557 20:26:25 -- accel/accel.sh@21 -- # val=32 00:09:07.557 20:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # IFS=: 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # read -r var val 00:09:07.557 20:26:25 -- accel/accel.sh@21 -- # val=32 00:09:07.557 20:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # IFS=: 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # read -r var val 00:09:07.557 20:26:25 -- accel/accel.sh@21 -- # val=1 00:09:07.557 20:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # IFS=: 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # read -r var val 00:09:07.557 20:26:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:07.557 20:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # IFS=: 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # read -r var val 00:09:07.557 20:26:25 -- accel/accel.sh@21 -- # val=Yes 00:09:07.557 20:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # IFS=: 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # read -r var val 00:09:07.557 20:26:25 -- accel/accel.sh@21 -- # val= 00:09:07.557 20:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # IFS=: 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # read -r var val 00:09:07.557 20:26:25 -- accel/accel.sh@21 -- # val= 00:09:07.557 20:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # IFS=: 00:09:07.557 20:26:25 -- accel/accel.sh@20 -- # read -r var val 00:09:10.101 20:26:28 -- accel/accel.sh@21 -- # val= 00:09:10.101 20:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:10.101 20:26:28 -- accel/accel.sh@20 -- # IFS=: 00:09:10.101 20:26:28 -- accel/accel.sh@20 -- # read -r var val 00:09:10.101 20:26:28 -- accel/accel.sh@21 -- # val= 00:09:10.101 20:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:10.101 20:26:28 -- accel/accel.sh@20 -- # IFS=: 00:09:10.101 20:26:28 -- accel/accel.sh@20 -- # read -r var val 00:09:10.101 20:26:28 -- accel/accel.sh@21 -- # val= 00:09:10.101 20:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:10.101 20:26:28 -- accel/accel.sh@20 -- # IFS=: 00:09:10.101 20:26:28 -- accel/accel.sh@20 -- # read -r var val 00:09:10.101 20:26:28 -- accel/accel.sh@21 -- # val= 00:09:10.101 20:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:10.101 20:26:28 -- accel/accel.sh@20 -- # 
IFS=: 00:09:10.101 20:26:28 -- accel/accel.sh@20 -- # read -r var val 00:09:10.101 20:26:28 -- accel/accel.sh@21 -- # val= 00:09:10.101 20:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:10.101 20:26:28 -- accel/accel.sh@20 -- # IFS=: 00:09:10.101 20:26:28 -- accel/accel.sh@20 -- # read -r var val 00:09:10.101 20:26:28 -- accel/accel.sh@21 -- # val= 00:09:10.101 20:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:10.101 20:26:28 -- accel/accel.sh@20 -- # IFS=: 00:09:10.101 20:26:28 -- accel/accel.sh@20 -- # read -r var val 00:09:10.101 20:26:28 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:09:10.101 20:26:28 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:09:10.101 20:26:28 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:09:10.101 00:09:10.101 real 0m19.336s 00:09:10.101 user 0m6.495s 00:09:10.101 sys 0m0.483s 00:09:10.101 20:26:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.101 20:26:28 -- common/autotest_common.sh@10 -- # set +x 00:09:10.101 ************************************ 00:09:10.101 END TEST accel_compare 00:09:10.101 ************************************ 00:09:10.101 20:26:28 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:09:10.101 20:26:28 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:10.101 20:26:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:10.101 20:26:28 -- common/autotest_common.sh@10 -- # set +x 00:09:10.101 ************************************ 00:09:10.101 START TEST accel_xor 00:09:10.101 ************************************ 00:09:10.101 20:26:28 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:09:10.101 20:26:28 -- accel/accel.sh@16 -- # local accel_opc 00:09:10.101 20:26:28 -- accel/accel.sh@17 -- # local accel_module 00:09:10.101 20:26:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:09:10.101 20:26:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:09:10.101 20:26:28 -- accel/accel.sh@12 -- # build_accel_config 00:09:10.101 20:26:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:10.101 20:26:28 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:10.101 20:26:28 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:10.101 20:26:28 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:09:10.101 20:26:28 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:10.101 20:26:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:10.101 20:26:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:10.101 20:26:28 -- accel/accel.sh@41 -- # local IFS=, 00:09:10.101 20:26:28 -- accel/accel.sh@42 -- # jq -r . 00:09:10.101 [2024-04-26 20:26:28.318644] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
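Unlike dualcast and compare, which the summaries above report as Module: dsa, the xor pass that follows runs with Module: software: no hardware module claims the xor opcode here, so the accel framework's software path serves it. The two traced xor invocations, reusing the hypothetical /tmp/accel.json sketched earlier, amount to:

# first xor pass: default two source buffers
/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /tmp/accel.json -t 1 -w xor -y
# second xor pass (further below): -x 3 raises the source-buffer count to three
/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /tmp/accel.json -t 1 -w xor -y -x 3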
00:09:10.101 [2024-04-26 20:26:28.318764] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3371550 ] 00:09:10.101 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.101 [2024-04-26 20:26:28.431701] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.361 [2024-04-26 20:26:28.525399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.361 [2024-04-26 20:26:28.529935] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:10.361 [2024-04-26 20:26:28.537918] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:20.357 20:26:37 -- accel/accel.sh@18 -- # out=' 00:09:20.357 SPDK Configuration: 00:09:20.357 Core mask: 0x1 00:09:20.357 00:09:20.357 Accel Perf Configuration: 00:09:20.357 Workload Type: xor 00:09:20.357 Source buffers: 2 00:09:20.357 Transfer size: 4096 bytes 00:09:20.357 Vector count 1 00:09:20.357 Module: software 00:09:20.357 Queue depth: 32 00:09:20.357 Allocate depth: 32 00:09:20.357 # threads/core: 1 00:09:20.357 Run time: 1 seconds 00:09:20.357 Verify: Yes 00:09:20.357 00:09:20.357 Running for 1 seconds... 00:09:20.357 00:09:20.357 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:20.357 ------------------------------------------------------------------------------------ 00:09:20.357 0,0 446816/s 1745 MiB/s 0 0 00:09:20.357 ==================================================================================== 00:09:20.357 Total 446816/s 1745 MiB/s 0 0' 00:09:20.357 20:26:37 -- accel/accel.sh@20 -- # IFS=: 00:09:20.357 20:26:37 -- accel/accel.sh@20 -- # read -r var val 00:09:20.357 20:26:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:09:20.357 20:26:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:09:20.357 20:26:37 -- accel/accel.sh@12 -- # build_accel_config 00:09:20.357 20:26:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:20.357 20:26:37 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:20.357 20:26:37 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:20.357 20:26:37 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:09:20.357 20:26:37 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:20.357 20:26:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:20.357 20:26:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:20.357 20:26:37 -- accel/accel.sh@41 -- # local IFS=, 00:09:20.357 20:26:37 -- accel/accel.sh@42 -- # jq -r . 00:09:20.357 [2024-04-26 20:26:37.987802] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
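The long runs of '# val=' lines in these traces are xtrace output of accel.sh replaying the captured summary field by field. Reconstructed from the trace (the field handling below is illustrative, not the verbatim script; $out and $expected_module are assumed to be set by the caller), the parser is roughly:

while IFS=: read -r var val; do
    case "$var" in
        *'Workload Type'*) accel_opc=${val//[[:space:]]/} ;;   # e.g. xor
        *'Module'*) accel_module=${val//[[:space:]]/} ;;       # e.g. software
    esac
done <<< "$out"
# final assertions, mirroring the accel.sh@28 checks visible in the trace
[[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == "$expected_module" ]]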
00:09:20.357 [2024-04-26 20:26:37.987932] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3373379 ] 00:09:20.357 EAL: No free 2048 kB hugepages reported on node 1 00:09:20.357 [2024-04-26 20:26:38.103141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.357 [2024-04-26 20:26:38.197857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.357 [2024-04-26 20:26:38.202406] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:20.357 [2024-04-26 20:26:38.210386] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:26.936 20:26:44 -- accel/accel.sh@21 -- # val= 00:09:26.936 20:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # IFS=: 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # read -r var val 00:09:26.936 20:26:44 -- accel/accel.sh@21 -- # val= 00:09:26.936 20:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # IFS=: 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # read -r var val 00:09:26.936 20:26:44 -- accel/accel.sh@21 -- # val=0x1 00:09:26.936 20:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # IFS=: 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # read -r var val 00:09:26.936 20:26:44 -- accel/accel.sh@21 -- # val= 00:09:26.936 20:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # IFS=: 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # read -r var val 00:09:26.936 20:26:44 -- accel/accel.sh@21 -- # val= 00:09:26.936 20:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # IFS=: 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # read -r var val 00:09:26.936 20:26:44 -- accel/accel.sh@21 -- # val=xor 00:09:26.936 20:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.936 20:26:44 -- accel/accel.sh@24 -- # accel_opc=xor 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # IFS=: 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # read -r var val 00:09:26.936 20:26:44 -- accel/accel.sh@21 -- # val=2 00:09:26.936 20:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # IFS=: 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # read -r var val 00:09:26.936 20:26:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:26.936 20:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # IFS=: 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # read -r var val 00:09:26.936 20:26:44 -- accel/accel.sh@21 -- # val= 00:09:26.936 20:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # IFS=: 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # read -r var val 00:09:26.936 20:26:44 -- accel/accel.sh@21 -- # val=software 00:09:26.936 20:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.936 20:26:44 -- accel/accel.sh@23 -- # accel_module=software 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # IFS=: 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # read -r var val 00:09:26.936 20:26:44 -- accel/accel.sh@21 -- # val=32 00:09:26.936 20:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # IFS=: 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # read -r var val 00:09:26.936 20:26:44 -- 
accel/accel.sh@21 -- # val=32 00:09:26.936 20:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # IFS=: 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # read -r var val 00:09:26.936 20:26:44 -- accel/accel.sh@21 -- # val=1 00:09:26.936 20:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # IFS=: 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # read -r var val 00:09:26.936 20:26:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:26.936 20:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # IFS=: 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # read -r var val 00:09:26.936 20:26:44 -- accel/accel.sh@21 -- # val=Yes 00:09:26.936 20:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # IFS=: 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # read -r var val 00:09:26.936 20:26:44 -- accel/accel.sh@21 -- # val= 00:09:26.936 20:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # IFS=: 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # read -r var val 00:09:26.936 20:26:44 -- accel/accel.sh@21 -- # val= 00:09:26.936 20:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # IFS=: 00:09:26.936 20:26:44 -- accel/accel.sh@20 -- # read -r var val 00:09:29.621 20:26:47 -- accel/accel.sh@21 -- # val= 00:09:29.621 20:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.621 20:26:47 -- accel/accel.sh@20 -- # IFS=: 00:09:29.621 20:26:47 -- accel/accel.sh@20 -- # read -r var val 00:09:29.621 20:26:47 -- accel/accel.sh@21 -- # val= 00:09:29.621 20:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.621 20:26:47 -- accel/accel.sh@20 -- # IFS=: 00:09:29.621 20:26:47 -- accel/accel.sh@20 -- # read -r var val 00:09:29.621 20:26:47 -- accel/accel.sh@21 -- # val= 00:09:29.621 20:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.621 20:26:47 -- accel/accel.sh@20 -- # IFS=: 00:09:29.621 20:26:47 -- accel/accel.sh@20 -- # read -r var val 00:09:29.621 20:26:47 -- accel/accel.sh@21 -- # val= 00:09:29.621 20:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.621 20:26:47 -- accel/accel.sh@20 -- # IFS=: 00:09:29.621 20:26:47 -- accel/accel.sh@20 -- # read -r var val 00:09:29.621 20:26:47 -- accel/accel.sh@21 -- # val= 00:09:29.621 20:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.621 20:26:47 -- accel/accel.sh@20 -- # IFS=: 00:09:29.621 20:26:47 -- accel/accel.sh@20 -- # read -r var val 00:09:29.621 20:26:47 -- accel/accel.sh@21 -- # val= 00:09:29.621 20:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.621 20:26:47 -- accel/accel.sh@20 -- # IFS=: 00:09:29.621 20:26:47 -- accel/accel.sh@20 -- # read -r var val 00:09:29.621 20:26:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:29.621 20:26:47 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:09:29.621 20:26:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:29.621 00:09:29.621 real 0m19.361s 00:09:29.621 user 0m6.541s 00:09:29.621 sys 0m0.458s 00:09:29.621 20:26:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:29.621 20:26:47 -- common/autotest_common.sh@10 -- # set +x 00:09:29.621 ************************************ 00:09:29.621 END TEST accel_xor 00:09:29.621 ************************************ 00:09:29.621 20:26:47 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:09:29.621 20:26:47 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 
00:09:29.621 20:26:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:29.621 20:26:47 -- common/autotest_common.sh@10 -- # set +x 00:09:29.621 ************************************ 00:09:29.621 START TEST accel_xor 00:09:29.621 ************************************ 00:09:29.621 20:26:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:09:29.621 20:26:47 -- accel/accel.sh@16 -- # local accel_opc 00:09:29.621 20:26:47 -- accel/accel.sh@17 -- # local accel_module 00:09:29.621 20:26:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:09:29.621 20:26:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:09:29.621 20:26:47 -- accel/accel.sh@12 -- # build_accel_config 00:09:29.621 20:26:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:29.621 20:26:47 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:29.621 20:26:47 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:29.621 20:26:47 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:09:29.621 20:26:47 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:29.621 20:26:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:29.621 20:26:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:29.621 20:26:47 -- accel/accel.sh@41 -- # local IFS=, 00:09:29.621 20:26:47 -- accel/accel.sh@42 -- # jq -r . 00:09:29.621 [2024-04-26 20:26:47.695940] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:29.621 [2024-04-26 20:26:47.696022] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3375461 ] 00:09:29.621 EAL: No free 2048 kB hugepages reported on node 1 00:09:29.621 [2024-04-26 20:26:47.780953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.621 [2024-04-26 20:26:47.876110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.621 [2024-04-26 20:26:47.880627] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:29.621 [2024-04-26 20:26:47.888605] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:39.606 20:26:57 -- accel/accel.sh@18 -- # out=' 00:09:39.606 SPDK Configuration: 00:09:39.606 Core mask: 0x1 00:09:39.606 00:09:39.606 Accel Perf Configuration: 00:09:39.606 Workload Type: xor 00:09:39.606 Source buffers: 3 00:09:39.606 Transfer size: 4096 bytes 00:09:39.606 Vector count 1 00:09:39.606 Module: software 00:09:39.606 Queue depth: 32 00:09:39.606 Allocate depth: 32 00:09:39.606 # threads/core: 1 00:09:39.606 Run time: 1 seconds 00:09:39.606 Verify: Yes 00:09:39.606 00:09:39.606 Running for 1 seconds... 
00:09:39.606 00:09:39.606 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:39.606 ------------------------------------------------------------------------------------ 00:09:39.606 0,0 428704/s 1674 MiB/s 0 0 00:09:39.606 ==================================================================================== 00:09:39.606 Total 428704/s 1674 MiB/s 0 0' 00:09:39.606 20:26:57 -- accel/accel.sh@20 -- # IFS=: 00:09:39.606 20:26:57 -- accel/accel.sh@20 -- # read -r var val 00:09:39.606 20:26:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:09:39.606 20:26:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:09:39.606 20:26:57 -- accel/accel.sh@12 -- # build_accel_config 00:09:39.606 20:26:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:39.606 20:26:57 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:39.606 20:26:57 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:39.606 20:26:57 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:09:39.606 20:26:57 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:39.606 20:26:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:39.606 20:26:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:39.606 20:26:57 -- accel/accel.sh@41 -- # local IFS=, 00:09:39.606 20:26:57 -- accel/accel.sh@42 -- # jq -r . 00:09:39.606 [2024-04-26 20:26:57.331984] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:39.606 [2024-04-26 20:26:57.332103] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3377318 ] 00:09:39.606 EAL: No free 2048 kB hugepages reported on node 1 00:09:39.606 [2024-04-26 20:26:57.444356] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.606 [2024-04-26 20:26:57.539010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.606 [2024-04-26 20:26:57.543498] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:39.606 [2024-04-26 20:26:57.551486] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:46.187 20:27:03 -- accel/accel.sh@21 -- # val= 00:09:46.187 20:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.187 20:27:03 -- accel/accel.sh@20 -- # IFS=: 00:09:46.187 20:27:03 -- accel/accel.sh@20 -- # read -r var val 00:09:46.187 20:27:03 -- accel/accel.sh@21 -- # val= 00:09:46.187 20:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # IFS=: 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # read -r var val 00:09:46.188 20:27:03 -- accel/accel.sh@21 -- # val=0x1 00:09:46.188 20:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # IFS=: 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # read -r var val 00:09:46.188 20:27:03 -- accel/accel.sh@21 -- # val= 00:09:46.188 20:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # IFS=: 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # read -r var val 00:09:46.188 20:27:03 -- accel/accel.sh@21 -- # val= 00:09:46.188 20:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # IFS=: 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # read -r var val 00:09:46.188 20:27:03 -- accel/accel.sh@21 -- # val=xor 
00:09:46.188 20:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.188 20:27:03 -- accel/accel.sh@24 -- # accel_opc=xor 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # IFS=: 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # read -r var val 00:09:46.188 20:27:03 -- accel/accel.sh@21 -- # val=3 00:09:46.188 20:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # IFS=: 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # read -r var val 00:09:46.188 20:27:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:46.188 20:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # IFS=: 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # read -r var val 00:09:46.188 20:27:03 -- accel/accel.sh@21 -- # val= 00:09:46.188 20:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # IFS=: 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # read -r var val 00:09:46.188 20:27:03 -- accel/accel.sh@21 -- # val=software 00:09:46.188 20:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.188 20:27:03 -- accel/accel.sh@23 -- # accel_module=software 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # IFS=: 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # read -r var val 00:09:46.188 20:27:03 -- accel/accel.sh@21 -- # val=32 00:09:46.188 20:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # IFS=: 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # read -r var val 00:09:46.188 20:27:03 -- accel/accel.sh@21 -- # val=32 00:09:46.188 20:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # IFS=: 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # read -r var val 00:09:46.188 20:27:03 -- accel/accel.sh@21 -- # val=1 00:09:46.188 20:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # IFS=: 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # read -r var val 00:09:46.188 20:27:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:46.188 20:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # IFS=: 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # read -r var val 00:09:46.188 20:27:03 -- accel/accel.sh@21 -- # val=Yes 00:09:46.188 20:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # IFS=: 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # read -r var val 00:09:46.188 20:27:03 -- accel/accel.sh@21 -- # val= 00:09:46.188 20:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # IFS=: 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # read -r var val 00:09:46.188 20:27:03 -- accel/accel.sh@21 -- # val= 00:09:46.188 20:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # IFS=: 00:09:46.188 20:27:03 -- accel/accel.sh@20 -- # read -r var val 00:09:48.733 20:27:06 -- accel/accel.sh@21 -- # val= 00:09:48.733 20:27:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.733 20:27:06 -- accel/accel.sh@20 -- # IFS=: 00:09:48.733 20:27:06 -- accel/accel.sh@20 -- # read -r var val 00:09:48.733 20:27:06 -- accel/accel.sh@21 -- # val= 00:09:48.733 20:27:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.733 20:27:06 -- accel/accel.sh@20 -- # IFS=: 00:09:48.733 20:27:06 -- accel/accel.sh@20 -- # read -r var val 00:09:48.733 20:27:06 -- accel/accel.sh@21 -- # val= 00:09:48.733 20:27:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.733 20:27:06 -- accel/accel.sh@20 
-- # IFS=: 00:09:48.733 20:27:06 -- accel/accel.sh@20 -- # read -r var val 00:09:48.733 20:27:06 -- accel/accel.sh@21 -- # val= 00:09:48.733 20:27:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.733 20:27:06 -- accel/accel.sh@20 -- # IFS=: 00:09:48.733 20:27:06 -- accel/accel.sh@20 -- # read -r var val 00:09:48.733 20:27:06 -- accel/accel.sh@21 -- # val= 00:09:48.733 20:27:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.733 20:27:06 -- accel/accel.sh@20 -- # IFS=: 00:09:48.733 20:27:06 -- accel/accel.sh@20 -- # read -r var val 00:09:48.733 20:27:06 -- accel/accel.sh@21 -- # val= 00:09:48.733 20:27:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.733 20:27:06 -- accel/accel.sh@20 -- # IFS=: 00:09:48.733 20:27:06 -- accel/accel.sh@20 -- # read -r var val 00:09:48.733 20:27:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:48.733 20:27:06 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:09:48.733 20:27:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:48.733 00:09:48.733 real 0m19.279s 00:09:48.733 user 0m6.496s 00:09:48.733 sys 0m0.416s 00:09:48.733 20:27:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:48.733 20:27:06 -- common/autotest_common.sh@10 -- # set +x 00:09:48.733 ************************************ 00:09:48.733 END TEST accel_xor 00:09:48.733 ************************************ 00:09:48.733 20:27:06 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:09:48.733 20:27:06 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:09:48.733 20:27:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:48.733 20:27:06 -- common/autotest_common.sh@10 -- # set +x 00:09:48.733 ************************************ 00:09:48.733 START TEST accel_dif_verify 00:09:48.733 ************************************ 00:09:48.733 20:27:06 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:09:48.733 20:27:06 -- accel/accel.sh@16 -- # local accel_opc 00:09:48.733 20:27:06 -- accel/accel.sh@17 -- # local accel_module 00:09:48.733 20:27:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:09:48.733 20:27:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:09:48.733 20:27:06 -- accel/accel.sh@12 -- # build_accel_config 00:09:48.733 20:27:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:48.733 20:27:06 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:48.733 20:27:06 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:48.733 20:27:06 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:09:48.733 20:27:06 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:48.733 20:27:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:48.733 20:27:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:48.733 20:27:06 -- accel/accel.sh@41 -- # local IFS=, 00:09:48.733 20:27:06 -- accel/accel.sh@42 -- # jq -r . 00:09:48.733 [2024-04-26 20:27:07.022090] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:09:48.733 [2024-04-26 20:27:07.022218] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3379670 ] 00:09:48.993 EAL: No free 2048 kB hugepages reported on node 1 00:09:48.993 [2024-04-26 20:27:07.141413] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.993 [2024-04-26 20:27:07.239687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.993 [2024-04-26 20:27:07.244220] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:48.993 [2024-04-26 20:27:07.252200] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:58.993 20:27:16 -- accel/accel.sh@18 -- # out=' 00:09:58.993 SPDK Configuration: 00:09:58.993 Core mask: 0x1 00:09:58.993 00:09:58.993 Accel Perf Configuration: 00:09:58.993 Workload Type: dif_verify 00:09:58.993 Vector size: 4096 bytes 00:09:58.993 Transfer size: 4096 bytes 00:09:58.993 Block size: 512 bytes 00:09:58.993 Metadata size: 8 bytes 00:09:58.993 Vector count 1 00:09:58.993 Module: dsa 00:09:58.993 Queue depth: 32 00:09:58.993 Allocate depth: 32 00:09:58.993 # threads/core: 1 00:09:58.993 Run time: 1 seconds 00:09:58.993 Verify: No 00:09:58.993 00:09:58.993 Running for 1 seconds... 00:09:58.993 00:09:58.993 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:58.993 ------------------------------------------------------------------------------------ 00:09:58.993 0,0 363009/s 1418 MiB/s 0 0 00:09:58.993 ==================================================================================== 00:09:58.993 Total 363009/s 1418 MiB/s 0 0' 00:09:58.993 20:27:16 -- accel/accel.sh@20 -- # IFS=: 00:09:58.993 20:27:16 -- accel/accel.sh@20 -- # read -r var val 00:09:58.993 20:27:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:09:58.993 20:27:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:09:58.993 20:27:16 -- accel/accel.sh@12 -- # build_accel_config 00:09:58.993 20:27:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:58.993 20:27:16 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:58.993 20:27:16 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:58.993 20:27:16 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:09:58.993 20:27:16 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:58.993 20:27:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:58.993 20:27:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:58.993 20:27:16 -- accel/accel.sh@41 -- # local IFS=, 00:09:58.993 20:27:16 -- accel/accel.sh@42 -- # jq -r . 00:09:58.993 [2024-04-26 20:27:16.701303] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
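For dif_verify, the 512-byte block size plus 8 bytes of metadata in the summary above means each 4096-byte vector is checked as eight protected blocks, the 8-byte tuple carrying the T10 DIF guard CRC, application tag and reference tag; with no -y flag on the command line the run reports Verify: No. The traced invocation, under the same config-file assumption as the earlier sketches:

/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /tmp/accel.json -t 1 -w dif_verify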
00:09:58.993 [2024-04-26 20:27:16.701438] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3381730 ] 00:09:58.993 EAL: No free 2048 kB hugepages reported on node 1 00:09:58.993 [2024-04-26 20:27:16.817134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.993 [2024-04-26 20:27:16.911122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.993 [2024-04-26 20:27:16.915660] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:58.993 [2024-04-26 20:27:16.923640] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:05.570 20:27:23 -- accel/accel.sh@21 -- # val= 00:10:05.570 20:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # IFS=: 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # read -r var val 00:10:05.570 20:27:23 -- accel/accel.sh@21 -- # val= 00:10:05.570 20:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # IFS=: 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # read -r var val 00:10:05.570 20:27:23 -- accel/accel.sh@21 -- # val=0x1 00:10:05.570 20:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # IFS=: 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # read -r var val 00:10:05.570 20:27:23 -- accel/accel.sh@21 -- # val= 00:10:05.570 20:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # IFS=: 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # read -r var val 00:10:05.570 20:27:23 -- accel/accel.sh@21 -- # val= 00:10:05.570 20:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # IFS=: 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # read -r var val 00:10:05.570 20:27:23 -- accel/accel.sh@21 -- # val=dif_verify 00:10:05.570 20:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.570 20:27:23 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # IFS=: 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # read -r var val 00:10:05.570 20:27:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:05.570 20:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # IFS=: 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # read -r var val 00:10:05.570 20:27:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:05.570 20:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # IFS=: 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # read -r var val 00:10:05.570 20:27:23 -- accel/accel.sh@21 -- # val='512 bytes' 00:10:05.570 20:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # IFS=: 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # read -r var val 00:10:05.570 20:27:23 -- accel/accel.sh@21 -- # val='8 bytes' 00:10:05.570 20:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # IFS=: 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # read -r var val 00:10:05.570 20:27:23 -- accel/accel.sh@21 -- # val= 00:10:05.570 20:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # IFS=: 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # read -r var val 00:10:05.570 20:27:23 -- accel/accel.sh@21 -- # val=dsa 
00:10:05.570 20:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.570 20:27:23 -- accel/accel.sh@23 -- # accel_module=dsa 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # IFS=: 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # read -r var val 00:10:05.570 20:27:23 -- accel/accel.sh@21 -- # val=32 00:10:05.570 20:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # IFS=: 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # read -r var val 00:10:05.570 20:27:23 -- accel/accel.sh@21 -- # val=32 00:10:05.570 20:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # IFS=: 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # read -r var val 00:10:05.570 20:27:23 -- accel/accel.sh@21 -- # val=1 00:10:05.570 20:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # IFS=: 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # read -r var val 00:10:05.570 20:27:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:05.570 20:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # IFS=: 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # read -r var val 00:10:05.570 20:27:23 -- accel/accel.sh@21 -- # val=No 00:10:05.570 20:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # IFS=: 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # read -r var val 00:10:05.570 20:27:23 -- accel/accel.sh@21 -- # val= 00:10:05.570 20:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # IFS=: 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # read -r var val 00:10:05.570 20:27:23 -- accel/accel.sh@21 -- # val= 00:10:05.570 20:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # IFS=: 00:10:05.570 20:27:23 -- accel/accel.sh@20 -- # read -r var val 00:10:08.111 20:27:26 -- accel/accel.sh@21 -- # val= 00:10:08.111 20:27:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.111 20:27:26 -- accel/accel.sh@20 -- # IFS=: 00:10:08.111 20:27:26 -- accel/accel.sh@20 -- # read -r var val 00:10:08.111 20:27:26 -- accel/accel.sh@21 -- # val= 00:10:08.111 20:27:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.111 20:27:26 -- accel/accel.sh@20 -- # IFS=: 00:10:08.111 20:27:26 -- accel/accel.sh@20 -- # read -r var val 00:10:08.111 20:27:26 -- accel/accel.sh@21 -- # val= 00:10:08.111 20:27:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.111 20:27:26 -- accel/accel.sh@20 -- # IFS=: 00:10:08.111 20:27:26 -- accel/accel.sh@20 -- # read -r var val 00:10:08.111 20:27:26 -- accel/accel.sh@21 -- # val= 00:10:08.111 20:27:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.111 20:27:26 -- accel/accel.sh@20 -- # IFS=: 00:10:08.111 20:27:26 -- accel/accel.sh@20 -- # read -r var val 00:10:08.111 20:27:26 -- accel/accel.sh@21 -- # val= 00:10:08.111 20:27:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.111 20:27:26 -- accel/accel.sh@20 -- # IFS=: 00:10:08.111 20:27:26 -- accel/accel.sh@20 -- # read -r var val 00:10:08.111 20:27:26 -- accel/accel.sh@21 -- # val= 00:10:08.111 20:27:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.111 20:27:26 -- accel/accel.sh@20 -- # IFS=: 00:10:08.111 20:27:26 -- accel/accel.sh@20 -- # read -r var val 00:10:08.111 20:27:26 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:10:08.111 20:27:26 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:10:08.111 20:27:26 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:10:08.111 00:10:08.111 real 0m19.354s 
00:10:08.111 user 0m6.533s 00:10:08.111 sys 0m0.468s 00:10:08.111 20:27:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:08.111 20:27:26 -- common/autotest_common.sh@10 -- # set +x 00:10:08.111 ************************************ 00:10:08.111 END TEST accel_dif_verify 00:10:08.111 ************************************ 00:10:08.111 20:27:26 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:10:08.111 20:27:26 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:10:08.111 20:27:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:08.111 20:27:26 -- common/autotest_common.sh@10 -- # set +x 00:10:08.111 ************************************ 00:10:08.111 START TEST accel_dif_generate 00:10:08.111 ************************************ 00:10:08.111 20:27:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:10:08.111 20:27:26 -- accel/accel.sh@16 -- # local accel_opc 00:10:08.111 20:27:26 -- accel/accel.sh@17 -- # local accel_module 00:10:08.111 20:27:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:10:08.111 20:27:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:08.111 20:27:26 -- accel/accel.sh@12 -- # build_accel_config 00:10:08.111 20:27:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:08.111 20:27:26 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:08.111 20:27:26 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:08.111 20:27:26 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:10:08.111 20:27:26 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:08.111 20:27:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:08.111 20:27:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:08.111 20:27:26 -- accel/accel.sh@41 -- # local IFS=, 00:10:08.111 20:27:26 -- accel/accel.sh@42 -- # jq -r . 00:10:08.111 [2024-04-26 20:27:26.406734] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:08.111 [2024-04-26 20:27:26.406859] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3383616 ] 00:10:08.372 EAL: No free 2048 kB hugepages reported on node 1 00:10:08.372 [2024-04-26 20:27:26.522480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.372 [2024-04-26 20:27:26.617679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.372 [2024-04-26 20:27:26.622214] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:08.372 [2024-04-26 20:27:26.630195] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:18.363 20:27:36 -- accel/accel.sh@18 -- # out=' 00:10:18.363 SPDK Configuration: 00:10:18.363 Core mask: 0x1 00:10:18.363 00:10:18.363 Accel Perf Configuration: 00:10:18.363 Workload Type: dif_generate 00:10:18.363 Vector size: 4096 bytes 00:10:18.363 Transfer size: 4096 bytes 00:10:18.363 Block size: 512 bytes 00:10:18.363 Metadata size: 8 bytes 00:10:18.363 Vector count 1 00:10:18.363 Module: software 00:10:18.363 Queue depth: 32 00:10:18.363 Allocate depth: 32 00:10:18.363 # threads/core: 1 00:10:18.363 Run time: 1 seconds 00:10:18.363 Verify: No 00:10:18.363 00:10:18.363 Running for 1 seconds... 
00:10:18.363 00:10:18.363 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:18.363 ------------------------------------------------------------------------------------ 00:10:18.363 0,0 151328/s 591 MiB/s 0 0 00:10:18.363 ==================================================================================== 00:10:18.363 Total 151328/s 591 MiB/s 0 0' 00:10:18.363 20:27:36 -- accel/accel.sh@20 -- # IFS=: 00:10:18.363 20:27:36 -- accel/accel.sh@20 -- # read -r var val 00:10:18.363 20:27:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:10:18.363 20:27:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:18.363 20:27:36 -- accel/accel.sh@12 -- # build_accel_config 00:10:18.363 20:27:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:18.363 20:27:36 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:18.363 20:27:36 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:18.364 20:27:36 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:10:18.364 20:27:36 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:18.364 20:27:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:18.364 20:27:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:18.364 20:27:36 -- accel/accel.sh@41 -- # local IFS=, 00:10:18.364 20:27:36 -- accel/accel.sh@42 -- # jq -r . 00:10:18.364 [2024-04-26 20:27:36.110866] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:18.364 [2024-04-26 20:27:36.110999] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3385649 ] 00:10:18.364 EAL: No free 2048 kB hugepages reported on node 1 00:10:18.364 [2024-04-26 20:27:36.226349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.364 [2024-04-26 20:27:36.322203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.364 [2024-04-26 20:27:36.326770] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:18.364 [2024-04-26 20:27:36.334750] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:24.941 20:27:42 -- accel/accel.sh@21 -- # val= 00:10:24.941 20:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # IFS=: 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # read -r var val 00:10:24.941 20:27:42 -- accel/accel.sh@21 -- # val= 00:10:24.941 20:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # IFS=: 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # read -r var val 00:10:24.941 20:27:42 -- accel/accel.sh@21 -- # val=0x1 00:10:24.941 20:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # IFS=: 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # read -r var val 00:10:24.941 20:27:42 -- accel/accel.sh@21 -- # val= 00:10:24.941 20:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # IFS=: 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # read -r var val 00:10:24.941 20:27:42 -- accel/accel.sh@21 -- # val= 00:10:24.941 20:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # IFS=: 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # read -r var val 00:10:24.941 20:27:42 -- accel/accel.sh@21 -- #
val=dif_generate 00:10:24.941 20:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.941 20:27:42 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # IFS=: 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # read -r var val 00:10:24.941 20:27:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:24.941 20:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # IFS=: 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # read -r var val 00:10:24.941 20:27:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:24.941 20:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # IFS=: 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # read -r var val 00:10:24.941 20:27:42 -- accel/accel.sh@21 -- # val='512 bytes' 00:10:24.941 20:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # IFS=: 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # read -r var val 00:10:24.941 20:27:42 -- accel/accel.sh@21 -- # val='8 bytes' 00:10:24.941 20:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # IFS=: 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # read -r var val 00:10:24.941 20:27:42 -- accel/accel.sh@21 -- # val= 00:10:24.941 20:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # IFS=: 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # read -r var val 00:10:24.941 20:27:42 -- accel/accel.sh@21 -- # val=software 00:10:24.941 20:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.941 20:27:42 -- accel/accel.sh@23 -- # accel_module=software 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # IFS=: 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # read -r var val 00:10:24.941 20:27:42 -- accel/accel.sh@21 -- # val=32 00:10:24.941 20:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # IFS=: 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # read -r var val 00:10:24.941 20:27:42 -- accel/accel.sh@21 -- # val=32 00:10:24.941 20:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # IFS=: 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # read -r var val 00:10:24.941 20:27:42 -- accel/accel.sh@21 -- # val=1 00:10:24.941 20:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # IFS=: 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # read -r var val 00:10:24.941 20:27:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:24.941 20:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # IFS=: 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # read -r var val 00:10:24.941 20:27:42 -- accel/accel.sh@21 -- # val=No 00:10:24.941 20:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # IFS=: 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # read -r var val 00:10:24.941 20:27:42 -- accel/accel.sh@21 -- # val= 00:10:24.941 20:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # IFS=: 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # read -r var val 00:10:24.941 20:27:42 -- accel/accel.sh@21 -- # val= 00:10:24.941 20:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # IFS=: 00:10:24.941 20:27:42 -- accel/accel.sh@20 -- # read -r var val 00:10:27.478 20:27:45 -- accel/accel.sh@21 -- # val= 00:10:27.478 20:27:45 -- accel/accel.sh@22 -- # 
case "$var" in 00:10:27.478 20:27:45 -- accel/accel.sh@20 -- # IFS=: 00:10:27.478 20:27:45 -- accel/accel.sh@20 -- # read -r var val 00:10:27.478 20:27:45 -- accel/accel.sh@21 -- # val= 00:10:27.478 20:27:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.478 20:27:45 -- accel/accel.sh@20 -- # IFS=: 00:10:27.478 20:27:45 -- accel/accel.sh@20 -- # read -r var val 00:10:27.478 20:27:45 -- accel/accel.sh@21 -- # val= 00:10:27.478 20:27:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.478 20:27:45 -- accel/accel.sh@20 -- # IFS=: 00:10:27.478 20:27:45 -- accel/accel.sh@20 -- # read -r var val 00:10:27.478 20:27:45 -- accel/accel.sh@21 -- # val= 00:10:27.478 20:27:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.478 20:27:45 -- accel/accel.sh@20 -- # IFS=: 00:10:27.478 20:27:45 -- accel/accel.sh@20 -- # read -r var val 00:10:27.478 20:27:45 -- accel/accel.sh@21 -- # val= 00:10:27.478 20:27:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.478 20:27:45 -- accel/accel.sh@20 -- # IFS=: 00:10:27.478 20:27:45 -- accel/accel.sh@20 -- # read -r var val 00:10:27.478 20:27:45 -- accel/accel.sh@21 -- # val= 00:10:27.478 20:27:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.478 20:27:45 -- accel/accel.sh@20 -- # IFS=: 00:10:27.478 20:27:45 -- accel/accel.sh@20 -- # read -r var val 00:10:27.478 20:27:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:27.478 20:27:45 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:10:27.478 20:27:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:27.478 00:10:27.478 real 0m19.359s 00:10:27.478 user 0m6.547s 00:10:27.478 sys 0m0.460s 00:10:27.478 20:27:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:27.478 20:27:45 -- common/autotest_common.sh@10 -- # set +x 00:10:27.478 ************************************ 00:10:27.478 END TEST accel_dif_generate 00:10:27.478 ************************************ 00:10:27.478 20:27:45 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:10:27.478 20:27:45 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:10:27.478 20:27:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:27.478 20:27:45 -- common/autotest_common.sh@10 -- # set +x 00:10:27.478 ************************************ 00:10:27.478 START TEST accel_dif_generate_copy 00:10:27.478 ************************************ 00:10:27.478 20:27:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:10:27.478 20:27:45 -- accel/accel.sh@16 -- # local accel_opc 00:10:27.478 20:27:45 -- accel/accel.sh@17 -- # local accel_module 00:10:27.479 20:27:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:10:27.479 20:27:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:27.479 20:27:45 -- accel/accel.sh@12 -- # build_accel_config 00:10:27.479 20:27:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:27.479 20:27:45 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:27.479 20:27:45 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:27.479 20:27:45 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:10:27.479 20:27:45 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:27.479 20:27:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:27.479 20:27:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:27.479 20:27:45 -- accel/accel.sh@41 -- # local IFS=, 00:10:27.479 20:27:45 -- accel/accel.sh@42 -- # 
jq -r . 00:10:27.479 [2024-04-26 20:27:45.796503] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:27.479 [2024-04-26 20:27:45.796619] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3387553 ] 00:10:27.739 EAL: No free 2048 kB hugepages reported on node 1 00:10:27.739 [2024-04-26 20:27:45.908763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.739 [2024-04-26 20:27:46.003738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.739 [2024-04-26 20:27:46.008254] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:27.739 [2024-04-26 20:27:46.016234] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:37.827 20:27:55 -- accel/accel.sh@18 -- # out=' 00:10:37.827 SPDK Configuration: 00:10:37.827 Core mask: 0x1 00:10:37.827 00:10:37.827 Accel Perf Configuration: 00:10:37.827 Workload Type: dif_generate_copy 00:10:37.827 Vector size: 4096 bytes 00:10:37.827 Transfer size: 4096 bytes 00:10:37.827 Vector count 1 00:10:37.827 Module: dsa 00:10:37.827 Queue depth: 32 00:10:37.827 Allocate depth: 32 00:10:37.827 # threads/core: 1 00:10:37.827 Run time: 1 seconds 00:10:37.827 Verify: No 00:10:37.827 00:10:37.827 Running for 1 seconds... 00:10:37.827 00:10:37.827 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:37.827 ------------------------------------------------------------------------------------ 00:10:37.827 0,0 334880/s 1328 MiB/s 0 0 00:10:37.827 ==================================================================================== 00:10:37.827 Total 334880/s 1308 MiB/s 0 0' 00:10:37.827 20:27:55 -- accel/accel.sh@20 -- # IFS=: 00:10:37.827 20:27:55 -- accel/accel.sh@20 -- # read -r var val 00:10:37.827 20:27:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:10:37.827 20:27:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:37.827 20:27:55 -- accel/accel.sh@12 -- # build_accel_config 00:10:37.827 20:27:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:37.827 20:27:55 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:37.827 20:27:55 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:37.827 20:27:55 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:10:37.827 20:27:55 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:37.827 20:27:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:37.827 20:27:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:37.827 20:27:55 -- accel/accel.sh@41 -- # local IFS=, 00:10:37.827 20:27:55 -- accel/accel.sh@42 -- # jq -r . 00:10:37.827 [2024-04-26 20:27:55.455974] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
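The xtrace above shows the pattern every accel case in this job follows: build_accel_config appends one {"method": ...} entry per enabled module (DSA and IAA in this run), and accel_perf receives the assembled JSON over an anonymous descriptor, which is why each command line records -c /dev/fd/62. The module lines are also worth noting: dif_generate stayed on the software fallback, while dif_generate_copy is offloaded to dsa. Below is a minimal stand-alone sketch of the same invocation, assuming a build tree laid out as in this workspace; the outer "subsystems" wrapper follows SPDK's usual JSON config layout and is an assumption, since the log only shows the inner method objects:

    # Enable the DSA and IAA user-mode modules, then run one workload for 1 s.
    # The outer JSON wrapper is assumed; only the inner entries appear in this log.
    cfg='{"subsystems":[{"subsystem":"accel","config":[
          {"method": "dsa_scan_accel_module"},
          {"method": "iaa_scan_accel_module"}]}]}'
    ./spdk/build/examples/accel_perf -c <(echo "$cfg") -t 1 -w dif_generate_copy

Process substitution keeps the config off disk, matching the /dev/fd/62 path captured in the trace.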
00:10:37.827 [2024-04-26 20:27:55.456104] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3389392 ] 00:10:37.827 EAL: No free 2048 kB hugepages reported on node 1 00:10:37.827 [2024-04-26 20:27:55.567576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.827 [2024-04-26 20:27:55.661574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.827 [2024-04-26 20:27:55.666070] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:37.827 [2024-04-26 20:27:55.674053] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:44.404 20:28:02 -- accel/accel.sh@21 -- # val= 00:10:44.404 20:28:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.404 20:28:02 -- accel/accel.sh@20 -- # IFS=: 00:10:44.404 20:28:02 -- accel/accel.sh@20 -- # read -r var val 00:10:44.404 20:28:02 -- accel/accel.sh@21 -- # val= 00:10:44.404 20:28:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.404 20:28:02 -- accel/accel.sh@20 -- # IFS=: 00:10:44.404 20:28:02 -- accel/accel.sh@20 -- # read -r var val 00:10:44.404 20:28:02 -- accel/accel.sh@21 -- # val=0x1 00:10:44.404 20:28:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.404 20:28:02 -- accel/accel.sh@20 -- # IFS=: 00:10:44.404 20:28:02 -- accel/accel.sh@20 -- # read -r var val 00:10:44.404 20:28:02 -- accel/accel.sh@21 -- # val= 00:10:44.404 20:28:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.404 20:28:02 -- accel/accel.sh@20 -- # IFS=: 00:10:44.404 20:28:02 -- accel/accel.sh@20 -- # read -r var val 00:10:44.404 20:28:02 -- accel/accel.sh@21 -- # val= 00:10:44.405 20:28:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.405 20:28:02 -- accel/accel.sh@20 -- # IFS=: 00:10:44.405 20:28:02 -- accel/accel.sh@20 -- # read -r var val 00:10:44.405 20:28:02 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:10:44.405 20:28:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.405 20:28:02 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:10:44.405 20:28:02 -- accel/accel.sh@20 -- # IFS=: 00:10:44.405 20:28:02 -- accel/accel.sh@20 -- # read -r var val 00:10:44.405 20:28:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:44.405 20:28:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.405 20:28:02 -- accel/accel.sh@20 -- # IFS=: 00:10:44.405 20:28:02 -- accel/accel.sh@20 -- # read -r var val 00:10:44.405 20:28:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:44.405 20:28:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.405 20:28:02 -- accel/accel.sh@20 -- # IFS=: 00:10:44.405 20:28:02 -- accel/accel.sh@20 -- # read -r var val 00:10:44.405 20:28:02 -- accel/accel.sh@21 -- # val= 00:10:44.405 20:28:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.405 20:28:02 -- accel/accel.sh@20 -- # IFS=: 00:10:44.405 20:28:02 -- accel/accel.sh@20 -- # read -r var val 00:10:44.405 20:28:02 -- accel/accel.sh@21 -- # val=dsa 00:10:44.405 20:28:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.405 20:28:02 -- accel/accel.sh@23 -- # accel_module=dsa 00:10:44.405 20:28:02 -- accel/accel.sh@20 -- # IFS=: 00:10:44.405 20:28:02 -- accel/accel.sh@20 -- # read -r var val 00:10:44.405 20:28:02 -- accel/accel.sh@21 -- # val=32 00:10:44.405 20:28:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.405 20:28:02 -- accel/accel.sh@20 -- # IFS=: 00:10:44.405 20:28:02 -- accel/accel.sh@20 -- # read -r var 
val 00:10:44.405 20:28:02 -- accel/accel.sh@21 -- # val=32 00:10:44.405 20:28:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.405 20:28:02 -- accel/accel.sh@20 -- # IFS=: 00:10:44.405 20:28:02 -- accel/accel.sh@20 -- # read -r var val 00:10:44.405 20:28:02 -- accel/accel.sh@21 -- # val=1 00:10:44.405 20:28:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.405 20:28:02 -- accel/accel.sh@20 -- # IFS=: 00:10:44.405 20:28:02 -- accel/accel.sh@20 -- # read -r var val 00:10:44.405 20:28:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:44.405 20:28:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.405 20:28:02 -- accel/accel.sh@20 -- # IFS=: 00:10:44.405 20:28:02 -- accel/accel.sh@20 -- # read -r var val 00:10:44.405 20:28:02 -- accel/accel.sh@21 -- # val=No 00:10:44.405 20:28:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.405 20:28:02 -- accel/accel.sh@20 -- # IFS=: 00:10:44.405 20:28:02 -- accel/accel.sh@20 -- # read -r var val 00:10:44.405 20:28:02 -- accel/accel.sh@21 -- # val= 00:10:44.405 20:28:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.405 20:28:02 -- accel/accel.sh@20 -- # IFS=: 00:10:44.405 20:28:02 -- accel/accel.sh@20 -- # read -r var val 00:10:44.405 20:28:02 -- accel/accel.sh@21 -- # val= 00:10:44.405 20:28:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.405 20:28:02 -- accel/accel.sh@20 -- # IFS=: 00:10:44.405 20:28:02 -- accel/accel.sh@20 -- # read -r var val 00:10:46.945 20:28:05 -- accel/accel.sh@21 -- # val= 00:10:46.945 20:28:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.945 20:28:05 -- accel/accel.sh@20 -- # IFS=: 00:10:46.945 20:28:05 -- accel/accel.sh@20 -- # read -r var val 00:10:46.945 20:28:05 -- accel/accel.sh@21 -- # val= 00:10:46.945 20:28:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.945 20:28:05 -- accel/accel.sh@20 -- # IFS=: 00:10:46.945 20:28:05 -- accel/accel.sh@20 -- # read -r var val 00:10:46.945 20:28:05 -- accel/accel.sh@21 -- # val= 00:10:46.945 20:28:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.945 20:28:05 -- accel/accel.sh@20 -- # IFS=: 00:10:46.945 20:28:05 -- accel/accel.sh@20 -- # read -r var val 00:10:46.945 20:28:05 -- accel/accel.sh@21 -- # val= 00:10:46.945 20:28:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.945 20:28:05 -- accel/accel.sh@20 -- # IFS=: 00:10:46.945 20:28:05 -- accel/accel.sh@20 -- # read -r var val 00:10:46.945 20:28:05 -- accel/accel.sh@21 -- # val= 00:10:46.945 20:28:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.945 20:28:05 -- accel/accel.sh@20 -- # IFS=: 00:10:46.945 20:28:05 -- accel/accel.sh@20 -- # read -r var val 00:10:46.945 20:28:05 -- accel/accel.sh@21 -- # val= 00:10:46.945 20:28:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.945 20:28:05 -- accel/accel.sh@20 -- # IFS=: 00:10:46.945 20:28:05 -- accel/accel.sh@20 -- # read -r var val 00:10:46.945 20:28:05 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:10:46.945 20:28:05 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:10:46.945 20:28:05 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:10:46.945 00:10:46.945 real 0m19.301s 00:10:46.945 user 0m6.509s 00:10:46.945 sys 0m0.459s 00:10:46.945 20:28:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:46.945 20:28:05 -- common/autotest_common.sh@10 -- # set +x 00:10:46.945 ************************************ 00:10:46.945 END TEST accel_dif_generate_copy 00:10:46.945 ************************************ 00:10:46.945 20:28:05 -- accel/accel.sh@107 -- # [[ y == y ]] 00:10:46.945 20:28:05 -- accel/accel.sh@108 -- # run_test accel_comp 
accel_test -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:46.945 20:28:05 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:10:46.945 20:28:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:46.945 20:28:05 -- common/autotest_common.sh@10 -- # set +x 00:10:46.945 ************************************ 00:10:46.945 START TEST accel_comp 00:10:46.945 ************************************ 00:10:46.945 20:28:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:46.945 20:28:05 -- accel/accel.sh@16 -- # local accel_opc 00:10:46.945 20:28:05 -- accel/accel.sh@17 -- # local accel_module 00:10:46.945 20:28:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:46.945 20:28:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:46.945 20:28:05 -- accel/accel.sh@12 -- # build_accel_config 00:10:46.946 20:28:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:46.946 20:28:05 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:46.946 20:28:05 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:46.946 20:28:05 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:10:46.946 20:28:05 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:46.946 20:28:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:46.946 20:28:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:46.946 20:28:05 -- accel/accel.sh@41 -- # local IFS=, 00:10:46.946 20:28:05 -- accel/accel.sh@42 -- # jq -r . 00:10:46.946 [2024-04-26 20:28:05.144021] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:46.946 [2024-04-26 20:28:05.144157] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3391491 ] 00:10:46.946 EAL: No free 2048 kB hugepages reported on node 1 00:10:46.946 [2024-04-26 20:28:05.273281] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.207 [2024-04-26 20:28:05.367473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.207 [2024-04-26 20:28:05.372092] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:47.207 [2024-04-26 20:28:05.380057] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:57.201 20:28:14 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:57.201 00:10:57.201 SPDK Configuration: 00:10:57.201 Core mask: 0x1 00:10:57.201 00:10:57.201 Accel Perf Configuration: 00:10:57.201 Workload Type: compress 00:10:57.201 Transfer size: 4096 bytes 00:10:57.201 Vector count 1 00:10:57.201 Module: iaa 00:10:57.201 File Name: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:57.201 Queue depth: 32 00:10:57.201 Allocate depth: 32 00:10:57.201 # threads/core: 1 00:10:57.201 Run time: 1 seconds 00:10:57.201 Verify: No 00:10:57.201 00:10:57.201 Running for 1 seconds... 
00:10:57.201 00:10:57.201 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:57.201 ------------------------------------------------------------------------------------ 00:10:57.201 0,0 284048/s 1183 MiB/s 0 0 00:10:57.201 ==================================================================================== 00:10:57.201 Total 284048/s 1109 MiB/s 0 0' 00:10:57.201 20:28:14 -- accel/accel.sh@20 -- # IFS=: 00:10:57.201 20:28:14 -- accel/accel.sh@20 -- # read -r var val 00:10:57.201 20:28:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:57.201 20:28:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:57.201 20:28:14 -- accel/accel.sh@12 -- # build_accel_config 00:10:57.201 20:28:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:57.201 20:28:14 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:57.201 20:28:14 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:57.201 20:28:14 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:10:57.201 20:28:14 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:57.201 20:28:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:57.201 20:28:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:57.201 20:28:14 -- accel/accel.sh@41 -- # local IFS=, 00:10:57.201 20:28:14 -- accel/accel.sh@42 -- # jq -r . 00:10:57.201 [2024-04-26 20:28:14.851548] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:57.201 [2024-04-26 20:28:14.851675] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3393456 ] 00:10:57.202 EAL: No free 2048 kB hugepages reported on node 1 00:10:57.202 [2024-04-26 20:28:14.968627] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.202 [2024-04-26 20:28:15.065591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.202 [2024-04-26 20:28:15.070135] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:57.202 [2024-04-26 20:28:15.078116] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:03.775 20:28:21 -- accel/accel.sh@21 -- # val= 00:11:03.775 20:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.775 20:28:21 -- accel/accel.sh@21 -- # val= 00:11:03.775 20:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.775 20:28:21 -- accel/accel.sh@21 -- # val= 00:11:03.775 20:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.775 20:28:21 -- accel/accel.sh@21 -- # val=0x1 00:11:03.775 20:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.775 20:28:21 -- accel/accel.sh@21 -- # val= 00:11:03.775 20:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # IFS=: 
00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.775 20:28:21 -- accel/accel.sh@21 -- # val= 00:11:03.775 20:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.775 20:28:21 -- accel/accel.sh@21 -- # val=compress 00:11:03.775 20:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.775 20:28:21 -- accel/accel.sh@24 -- # accel_opc=compress 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.775 20:28:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:03.775 20:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.775 20:28:21 -- accel/accel.sh@21 -- # val= 00:11:03.775 20:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.775 20:28:21 -- accel/accel.sh@21 -- # val=iaa 00:11:03.775 20:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.775 20:28:21 -- accel/accel.sh@23 -- # accel_module=iaa 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.775 20:28:21 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:11:03.775 20:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.775 20:28:21 -- accel/accel.sh@21 -- # val=32 00:11:03.775 20:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.775 20:28:21 -- accel/accel.sh@21 -- # val=32 00:11:03.775 20:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.775 20:28:21 -- accel/accel.sh@21 -- # val=1 00:11:03.775 20:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.775 20:28:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:03.775 20:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.775 20:28:21 -- accel/accel.sh@21 -- # val=No 00:11:03.775 20:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.775 20:28:21 -- accel/accel.sh@21 -- # val= 00:11:03.775 20:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.775 20:28:21 -- accel/accel.sh@21 -- # val= 00:11:03.775 20:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.775 20:28:21 -- accel/accel.sh@20 -- # read -r var val 00:11:06.311 20:28:24 -- accel/accel.sh@21 -- # val= 00:11:06.311 20:28:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.311 20:28:24 -- accel/accel.sh@20 -- # IFS=: 00:11:06.311 20:28:24 -- accel/accel.sh@20 -- 
# read -r var val 00:11:06.311 20:28:24 -- accel/accel.sh@21 -- # val= 00:11:06.311 20:28:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.311 20:28:24 -- accel/accel.sh@20 -- # IFS=: 00:11:06.311 20:28:24 -- accel/accel.sh@20 -- # read -r var val 00:11:06.311 20:28:24 -- accel/accel.sh@21 -- # val= 00:11:06.311 20:28:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.311 20:28:24 -- accel/accel.sh@20 -- # IFS=: 00:11:06.311 20:28:24 -- accel/accel.sh@20 -- # read -r var val 00:11:06.311 20:28:24 -- accel/accel.sh@21 -- # val= 00:11:06.311 20:28:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.311 20:28:24 -- accel/accel.sh@20 -- # IFS=: 00:11:06.311 20:28:24 -- accel/accel.sh@20 -- # read -r var val 00:11:06.311 20:28:24 -- accel/accel.sh@21 -- # val= 00:11:06.311 20:28:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.311 20:28:24 -- accel/accel.sh@20 -- # IFS=: 00:11:06.311 20:28:24 -- accel/accel.sh@20 -- # read -r var val 00:11:06.311 20:28:24 -- accel/accel.sh@21 -- # val= 00:11:06.311 20:28:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.311 20:28:24 -- accel/accel.sh@20 -- # IFS=: 00:11:06.311 20:28:24 -- accel/accel.sh@20 -- # read -r var val 00:11:06.311 20:28:24 -- accel/accel.sh@28 -- # [[ -n iaa ]] 00:11:06.311 20:28:24 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:11:06.311 20:28:24 -- accel/accel.sh@28 -- # [[ iaa == \i\a\a ]] 00:11:06.311 00:11:06.311 real 0m19.404s 00:11:06.311 user 0m6.550s 00:11:06.311 sys 0m0.499s 00:11:06.311 20:28:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:06.311 20:28:24 -- common/autotest_common.sh@10 -- # set +x 00:11:06.311 ************************************ 00:11:06.311 END TEST accel_comp 00:11:06.311 ************************************ 00:11:06.311 20:28:24 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:11:06.311 20:28:24 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:11:06.311 20:28:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:06.311 20:28:24 -- common/autotest_common.sh@10 -- # set +x 00:11:06.311 ************************************ 00:11:06.311 START TEST accel_decomp 00:11:06.311 ************************************ 00:11:06.312 20:28:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:11:06.312 20:28:24 -- accel/accel.sh@16 -- # local accel_opc 00:11:06.312 20:28:24 -- accel/accel.sh@17 -- # local accel_module 00:11:06.312 20:28:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:11:06.312 20:28:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:11:06.312 20:28:24 -- accel/accel.sh@12 -- # build_accel_config 00:11:06.312 20:28:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:06.312 20:28:24 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:11:06.312 20:28:24 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:11:06.312 20:28:24 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:11:06.312 20:28:24 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:11:06.312 20:28:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:06.312 20:28:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:06.312 20:28:24 -- accel/accel.sh@41 
-- # local IFS=, 00:11:06.312 20:28:24 -- accel/accel.sh@42 -- # jq -r . 00:11:06.312 [2024-04-26 20:28:24.577097] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:06.312 [2024-04-26 20:28:24.577217] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3395426 ] 00:11:06.312 EAL: No free 2048 kB hugepages reported on node 1 00:11:06.573 [2024-04-26 20:28:24.688970] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.573 [2024-04-26 20:28:24.779693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.573 [2024-04-26 20:28:24.784173] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:06.573 [2024-04-26 20:28:24.792159] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:16.562 20:28:34 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:16.562 00:11:16.562 SPDK Configuration: 00:11:16.562 Core mask: 0x1 00:11:16.562 00:11:16.562 Accel Perf Configuration: 00:11:16.562 Workload Type: decompress 00:11:16.562 Transfer size: 4096 bytes 00:11:16.562 Vector count 1 00:11:16.562 Module: iaa 00:11:16.562 File Name: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:11:16.562 Queue depth: 32 00:11:16.562 Allocate depth: 32 00:11:16.562 # threads/core: 1 00:11:16.562 Run time: 1 seconds 00:11:16.562 Verify: Yes 00:11:16.562 00:11:16.562 Running for 1 seconds... 00:11:16.562 00:11:16.562 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:16.562 ------------------------------------------------------------------------------------ 00:11:16.562 0,0 270816/s 614 MiB/s 0 0 00:11:16.562 ==================================================================================== 00:11:16.562 Total 270816/s 1057 MiB/s 0 0' 00:11:16.562 20:28:34 -- accel/accel.sh@20 -- # IFS=: 00:11:16.562 20:28:34 -- accel/accel.sh@20 -- # read -r var val 00:11:16.562 20:28:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:11:16.562 20:28:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:11:16.562 20:28:34 -- accel/accel.sh@12 -- # build_accel_config 00:11:16.562 20:28:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:16.562 20:28:34 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:11:16.562 20:28:34 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:11:16.562 20:28:34 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:11:16.562 20:28:34 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:11:16.562 20:28:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:16.562 20:28:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:16.562 20:28:34 -- accel/accel.sh@41 -- # local IFS=, 00:11:16.562 20:28:34 -- accel/accel.sh@42 -- # jq -r . 00:11:16.562 [2024-04-26 20:28:34.230292] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
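Compress and decompress both land on the iaa module, and the decompress case adds two flags to the accel_perf line: -l points at the pre-compressed input under test/accel/bib (the captured 'Preparing input file...' line is accel_perf readying it before the timed pass), and -y enables result verification, reported above as 'Verify: Yes'. A sketch of the same run, reusing the assumed $cfg JSON from the earlier sketch:

    # Decompress the prepared test vector through IAA, verifying every result.
    # $cfg as defined in the earlier sketch.
    ./spdk/build/examples/accel_perf -c <(echo "$cfg") \
        -t 1 -w decompress -l ./spdk/test/accel/bib -y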
00:11:16.562 [2024-04-26 20:28:34.230430] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3397246 ] 00:11:16.562 EAL: No free 2048 kB hugepages reported on node 1 00:11:16.562 [2024-04-26 20:28:34.325970] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.562 [2024-04-26 20:28:34.415023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.562 [2024-04-26 20:28:34.419569] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:16.562 [2024-04-26 20:28:34.427547] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:23.143 20:28:40 -- accel/accel.sh@21 -- # val= 00:11:23.143 20:28:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # IFS=: 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # read -r var val 00:11:23.143 20:28:40 -- accel/accel.sh@21 -- # val= 00:11:23.143 20:28:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # IFS=: 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # read -r var val 00:11:23.143 20:28:40 -- accel/accel.sh@21 -- # val= 00:11:23.143 20:28:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # IFS=: 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # read -r var val 00:11:23.143 20:28:40 -- accel/accel.sh@21 -- # val=0x1 00:11:23.143 20:28:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # IFS=: 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # read -r var val 00:11:23.143 20:28:40 -- accel/accel.sh@21 -- # val= 00:11:23.143 20:28:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # IFS=: 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # read -r var val 00:11:23.143 20:28:40 -- accel/accel.sh@21 -- # val= 00:11:23.143 20:28:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # IFS=: 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # read -r var val 00:11:23.143 20:28:40 -- accel/accel.sh@21 -- # val=decompress 00:11:23.143 20:28:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.143 20:28:40 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # IFS=: 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # read -r var val 00:11:23.143 20:28:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:23.143 20:28:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # IFS=: 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # read -r var val 00:11:23.143 20:28:40 -- accel/accel.sh@21 -- # val= 00:11:23.143 20:28:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # IFS=: 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # read -r var val 00:11:23.143 20:28:40 -- accel/accel.sh@21 -- # val=iaa 00:11:23.143 20:28:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.143 20:28:40 -- accel/accel.sh@23 -- # accel_module=iaa 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # IFS=: 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # read -r var val 00:11:23.143 20:28:40 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:11:23.143 20:28:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # IFS=: 00:11:23.143 20:28:40 -- 
accel/accel.sh@20 -- # read -r var val 00:11:23.143 20:28:40 -- accel/accel.sh@21 -- # val=32 00:11:23.143 20:28:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # IFS=: 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # read -r var val 00:11:23.143 20:28:40 -- accel/accel.sh@21 -- # val=32 00:11:23.143 20:28:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # IFS=: 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # read -r var val 00:11:23.143 20:28:40 -- accel/accel.sh@21 -- # val=1 00:11:23.143 20:28:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # IFS=: 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # read -r var val 00:11:23.143 20:28:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:23.143 20:28:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # IFS=: 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # read -r var val 00:11:23.143 20:28:40 -- accel/accel.sh@21 -- # val=Yes 00:11:23.143 20:28:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # IFS=: 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # read -r var val 00:11:23.143 20:28:40 -- accel/accel.sh@21 -- # val= 00:11:23.143 20:28:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # IFS=: 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # read -r var val 00:11:23.143 20:28:40 -- accel/accel.sh@21 -- # val= 00:11:23.143 20:28:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # IFS=: 00:11:23.143 20:28:40 -- accel/accel.sh@20 -- # read -r var val 00:11:25.685 20:28:43 -- accel/accel.sh@21 -- # val= 00:11:25.685 20:28:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.685 20:28:43 -- accel/accel.sh@20 -- # IFS=: 00:11:25.685 20:28:43 -- accel/accel.sh@20 -- # read -r var val 00:11:25.685 20:28:43 -- accel/accel.sh@21 -- # val= 00:11:25.685 20:28:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.685 20:28:43 -- accel/accel.sh@20 -- # IFS=: 00:11:25.685 20:28:43 -- accel/accel.sh@20 -- # read -r var val 00:11:25.685 20:28:43 -- accel/accel.sh@21 -- # val= 00:11:25.685 20:28:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.685 20:28:43 -- accel/accel.sh@20 -- # IFS=: 00:11:25.685 20:28:43 -- accel/accel.sh@20 -- # read -r var val 00:11:25.685 20:28:43 -- accel/accel.sh@21 -- # val= 00:11:25.685 20:28:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.685 20:28:43 -- accel/accel.sh@20 -- # IFS=: 00:11:25.685 20:28:43 -- accel/accel.sh@20 -- # read -r var val 00:11:25.685 20:28:43 -- accel/accel.sh@21 -- # val= 00:11:25.685 20:28:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.685 20:28:43 -- accel/accel.sh@20 -- # IFS=: 00:11:25.685 20:28:43 -- accel/accel.sh@20 -- # read -r var val 00:11:25.685 20:28:43 -- accel/accel.sh@21 -- # val= 00:11:25.685 20:28:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.685 20:28:43 -- accel/accel.sh@20 -- # IFS=: 00:11:25.685 20:28:43 -- accel/accel.sh@20 -- # read -r var val 00:11:25.685 20:28:43 -- accel/accel.sh@28 -- # [[ -n iaa ]] 00:11:25.685 20:28:43 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:25.685 20:28:43 -- accel/accel.sh@28 -- # [[ iaa == \i\a\a ]] 00:11:25.685 00:11:25.685 real 0m19.325s 00:11:25.685 user 0m6.552s 00:11:25.685 sys 0m0.419s 00:11:25.685 20:28:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:25.686 20:28:43 -- common/autotest_common.sh@10 -- # set +x 00:11:25.686 
************************************ 00:11:25.686 END TEST accel_decomp 00:11:25.686 ************************************ 00:11:25.686 20:28:43 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:11:25.686 20:28:43 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:25.686 20:28:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:25.686 20:28:43 -- common/autotest_common.sh@10 -- # set +x 00:11:25.686 ************************************ 00:11:25.686 START TEST accel_decmop_full 00:11:25.686 ************************************ 00:11:25.686 20:28:43 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:11:25.686 20:28:43 -- accel/accel.sh@16 -- # local accel_opc 00:11:25.686 20:28:43 -- accel/accel.sh@17 -- # local accel_module 00:11:25.686 20:28:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:11:25.686 20:28:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:11:25.686 20:28:43 -- accel/accel.sh@12 -- # build_accel_config 00:11:25.686 20:28:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:25.686 20:28:43 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:11:25.686 20:28:43 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:11:25.686 20:28:43 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:11:25.686 20:28:43 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:11:25.686 20:28:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:25.686 20:28:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:25.686 20:28:43 -- accel/accel.sh@41 -- # local IFS=, 00:11:25.686 20:28:43 -- accel/accel.sh@42 -- # jq -r . 00:11:25.686 [2024-04-26 20:28:43.932272] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:25.686 [2024-04-26 20:28:43.932394] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3399360 ] 00:11:25.686 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.945 [2024-04-26 20:28:44.044360] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.945 [2024-04-26 20:28:44.133730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.945 [2024-04-26 20:28:44.138261] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:25.945 [2024-04-26 20:28:44.146241] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:35.933 20:28:53 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:35.933 00:11:35.933 SPDK Configuration: 00:11:35.933 Core mask: 0x1 00:11:35.933 00:11:35.933 Accel Perf Configuration: 00:11:35.933 Workload Type: decompress 00:11:35.933 Transfer size: 111250 bytes 00:11:35.933 Vector count 1 00:11:35.933 Module: iaa 00:11:35.933 File Name: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:11:35.933 Queue depth: 32 00:11:35.933 Allocate depth: 32 00:11:35.933 # threads/core: 1 00:11:35.933 Run time: 1 seconds 00:11:35.933 Verify: Yes 00:11:35.933 00:11:35.933 Running for 1 seconds... 00:11:35.933 00:11:35.933 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:35.933 ------------------------------------------------------------------------------------ 00:11:35.933 0,0 105073/s 5923 MiB/s 0 0 00:11:35.933 ==================================================================================== 00:11:35.933 Total 105073/s 11147 MiB/s 0 0' 00:11:35.933 20:28:53 -- accel/accel.sh@20 -- # IFS=: 00:11:35.933 20:28:53 -- accel/accel.sh@20 -- # read -r var val 00:11:35.933 20:28:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:11:35.933 20:28:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:11:35.933 20:28:53 -- accel/accel.sh@12 -- # build_accel_config 00:11:35.933 20:28:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:35.933 20:28:53 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:11:35.933 20:28:53 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:11:35.933 20:28:53 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:11:35.933 20:28:53 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:11:35.933 20:28:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:35.933 20:28:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:35.933 20:28:53 -- accel/accel.sh@41 -- # local IFS=, 00:11:35.933 20:28:53 -- accel/accel.sh@42 -- # jq -r . 00:11:35.933 [2024-04-26 20:28:53.594058] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
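The decmop_full variant re-runs the same decompress workload with -o 0 appended to the harness command line; its visible effect in the configuration block is the transfer size growing from 4096 to 111250 bytes, and the bigger buffers lift the reported aggregate to 11147 MiB/s on a single core. The sketch below copies -o 0 verbatim from the run_test line and reuses the assumed $cfg; the flag's meaning here is inferred only from the transfer size reported above:

    # Same decompress run, but full-sized (111250-byte) chunks instead of 4 KiB.
    # $cfg as defined in the earlier sketch; -o 0 taken from the run_test line.
    ./spdk/build/examples/accel_perf -c <(echo "$cfg") \
        -t 1 -w decompress -l ./spdk/test/accel/bib -y -o 0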
00:11:35.933 [2024-04-26 20:28:53.594188] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3401180 ] 00:11:35.933 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.933 [2024-04-26 20:28:53.709115] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.933 [2024-04-26 20:28:53.797292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.933 [2024-04-26 20:28:53.801840] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:35.933 [2024-04-26 20:28:53.809822] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:42.577 20:29:00 -- accel/accel.sh@21 -- # val= 00:11:42.577 20:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.577 20:29:00 -- accel/accel.sh@20 -- # IFS=: 00:11:42.577 20:29:00 -- accel/accel.sh@20 -- # read -r var val 00:11:42.577 20:29:00 -- accel/accel.sh@21 -- # val= 00:11:42.577 20:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.577 20:29:00 -- accel/accel.sh@20 -- # IFS=: 00:11:42.577 20:29:00 -- accel/accel.sh@20 -- # read -r var val 00:11:42.577 20:29:00 -- accel/accel.sh@21 -- # val= 00:11:42.577 20:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.577 20:29:00 -- accel/accel.sh@20 -- # IFS=: 00:11:42.577 20:29:00 -- accel/accel.sh@20 -- # read -r var val 00:11:42.577 20:29:00 -- accel/accel.sh@21 -- # val=0x1 00:11:42.577 20:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.577 20:29:00 -- accel/accel.sh@20 -- # IFS=: 00:11:42.577 20:29:00 -- accel/accel.sh@20 -- # read -r var val 00:11:42.577 20:29:00 -- accel/accel.sh@21 -- # val= 00:11:42.577 20:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.577 20:29:00 -- accel/accel.sh@20 -- # IFS=: 00:11:42.577 20:29:00 -- accel/accel.sh@20 -- # read -r var val 00:11:42.577 20:29:00 -- accel/accel.sh@21 -- # val= 00:11:42.577 20:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.577 20:29:00 -- accel/accel.sh@20 -- # IFS=: 00:11:42.577 20:29:00 -- accel/accel.sh@20 -- # read -r var val 00:11:42.577 20:29:00 -- accel/accel.sh@21 -- # val=decompress 00:11:42.577 20:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.577 20:29:00 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:42.577 20:29:00 -- accel/accel.sh@20 -- # IFS=: 00:11:42.577 20:29:00 -- accel/accel.sh@20 -- # read -r var val 00:11:42.577 20:29:00 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:42.577 20:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.577 20:29:00 -- accel/accel.sh@20 -- # IFS=: 00:11:42.577 20:29:00 -- accel/accel.sh@20 -- # read -r var val 00:11:42.577 20:29:00 -- accel/accel.sh@21 -- # val= 00:11:42.577 20:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.577 20:29:00 -- accel/accel.sh@20 -- # IFS=: 00:11:42.577 20:29:00 -- accel/accel.sh@20 -- # read -r var val 00:11:42.577 20:29:00 -- accel/accel.sh@21 -- # val=iaa 00:11:42.577 20:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.577 20:29:00 -- accel/accel.sh@23 -- # accel_module=iaa 00:11:42.577 20:29:00 -- accel/accel.sh@20 -- # IFS=: 00:11:42.577 20:29:00 -- accel/accel.sh@20 -- # read -r var val 00:11:42.577 20:29:00 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:11:42.577 20:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.577 20:29:00 -- accel/accel.sh@20 -- # IFS=: 00:11:42.577 20:29:00 -- 
accel/accel.sh@20 -- # read -r var val 00:11:42.577 20:29:00 -- accel/accel.sh@21 -- # val=32 00:11:42.577 20:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.577 20:29:00 -- accel/accel.sh@20 -- # IFS=: 00:11:42.577 20:29:00 -- accel/accel.sh@20 -- # read -r var val 00:11:42.577 20:29:00 -- accel/accel.sh@21 -- # val=32 00:11:42.577 20:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.577 20:29:00 -- accel/accel.sh@20 -- # IFS=: 00:11:42.577 20:29:00 -- accel/accel.sh@20 -- # read -r var val 00:11:42.577 20:29:00 -- accel/accel.sh@21 -- # val=1 00:11:42.577 20:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.577 20:29:00 -- accel/accel.sh@20 -- # IFS=: 00:11:42.577 20:29:00 -- accel/accel.sh@20 -- # read -r var val 00:11:42.577 20:29:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:42.577 20:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.577 20:29:00 -- accel/accel.sh@20 -- # IFS=: 00:11:42.577 20:29:00 -- accel/accel.sh@20 -- # read -r var val 00:11:42.577 20:29:00 -- accel/accel.sh@21 -- # val=Yes 00:11:42.577 20:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.577 20:29:00 -- accel/accel.sh@20 -- # IFS=: 00:11:42.577 20:29:00 -- accel/accel.sh@20 -- # read -r var val 00:11:42.577 20:29:00 -- accel/accel.sh@21 -- # val= 00:11:42.578 20:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.578 20:29:00 -- accel/accel.sh@20 -- # IFS=: 00:11:42.578 20:29:00 -- accel/accel.sh@20 -- # read -r var val 00:11:42.578 20:29:00 -- accel/accel.sh@21 -- # val= 00:11:42.578 20:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.578 20:29:00 -- accel/accel.sh@20 -- # IFS=: 00:11:42.578 20:29:00 -- accel/accel.sh@20 -- # read -r var val 00:11:45.129 20:29:03 -- accel/accel.sh@21 -- # val= 00:11:45.129 20:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.129 20:29:03 -- accel/accel.sh@20 -- # IFS=: 00:11:45.129 20:29:03 -- accel/accel.sh@20 -- # read -r var val 00:11:45.130 20:29:03 -- accel/accel.sh@21 -- # val= 00:11:45.130 20:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.130 20:29:03 -- accel/accel.sh@20 -- # IFS=: 00:11:45.130 20:29:03 -- accel/accel.sh@20 -- # read -r var val 00:11:45.130 20:29:03 -- accel/accel.sh@21 -- # val= 00:11:45.130 20:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.130 20:29:03 -- accel/accel.sh@20 -- # IFS=: 00:11:45.130 20:29:03 -- accel/accel.sh@20 -- # read -r var val 00:11:45.130 20:29:03 -- accel/accel.sh@21 -- # val= 00:11:45.130 20:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.130 20:29:03 -- accel/accel.sh@20 -- # IFS=: 00:11:45.130 20:29:03 -- accel/accel.sh@20 -- # read -r var val 00:11:45.130 20:29:03 -- accel/accel.sh@21 -- # val= 00:11:45.130 20:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.130 20:29:03 -- accel/accel.sh@20 -- # IFS=: 00:11:45.130 20:29:03 -- accel/accel.sh@20 -- # read -r var val 00:11:45.130 20:29:03 -- accel/accel.sh@21 -- # val= 00:11:45.130 20:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.130 20:29:03 -- accel/accel.sh@20 -- # IFS=: 00:11:45.130 20:29:03 -- accel/accel.sh@20 -- # read -r var val 00:11:45.130 20:29:03 -- accel/accel.sh@28 -- # [[ -n iaa ]] 00:11:45.130 20:29:03 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:45.130 20:29:03 -- accel/accel.sh@28 -- # [[ iaa == \i\a\a ]] 00:11:45.130 00:11:45.130 real 0m19.369s 00:11:45.130 user 0m6.568s 00:11:45.130 sys 0m0.445s 00:11:45.130 20:29:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:45.130 20:29:03 -- common/autotest_common.sh@10 -- # set +x 00:11:45.130 
************************************ 00:11:45.130 END TEST accel_decmop_full 00:11:45.130 ************************************ 00:11:45.130 20:29:03 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:11:45.130 20:29:03 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:45.130 20:29:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:45.130 20:29:03 -- common/autotest_common.sh@10 -- # set +x 00:11:45.130 ************************************ 00:11:45.130 START TEST accel_decomp_mcore 00:11:45.130 ************************************ 00:11:45.130 20:29:03 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:11:45.130 20:29:03 -- accel/accel.sh@16 -- # local accel_opc 00:11:45.130 20:29:03 -- accel/accel.sh@17 -- # local accel_module 00:11:45.130 20:29:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:11:45.130 20:29:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:11:45.130 20:29:03 -- accel/accel.sh@12 -- # build_accel_config 00:11:45.130 20:29:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:45.130 20:29:03 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:11:45.130 20:29:03 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:11:45.130 20:29:03 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:11:45.130 20:29:03 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:11:45.130 20:29:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:45.130 20:29:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:45.130 20:29:03 -- accel/accel.sh@41 -- # local IFS=, 00:11:45.130 20:29:03 -- accel/accel.sh@42 -- # jq -r . 00:11:45.130 [2024-04-26 20:29:03.339884] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:45.130 [2024-04-26 20:29:03.340009] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3403296 ] 00:11:45.130 EAL: No free 2048 kB hugepages reported on node 1 00:11:45.130 [2024-04-26 20:29:03.468107] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:45.397 [2024-04-26 20:29:03.560177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:45.397 [2024-04-26 20:29:03.560274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:45.397 [2024-04-26 20:29:03.560388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.397 [2024-04-26 20:29:03.560406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:45.397 [2024-04-26 20:29:03.565022] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:45.397 [2024-04-26 20:29:03.572991] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:55.388 20:29:13 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:55.388 00:11:55.388 SPDK Configuration: 00:11:55.388 Core mask: 0xf 00:11:55.388 00:11:55.388 Accel Perf Configuration: 00:11:55.388 Workload Type: decompress 00:11:55.388 Transfer size: 4096 bytes 00:11:55.388 Vector count 1 00:11:55.388 Module: iaa 00:11:55.388 File Name: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:11:55.388 Queue depth: 32 00:11:55.388 Allocate depth: 32 00:11:55.388 # threads/core: 1 00:11:55.388 Run time: 1 seconds 00:11:55.388 Verify: Yes 00:11:55.388 00:11:55.388 Running for 1 seconds... 00:11:55.388 00:11:55.388 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:55.388 ------------------------------------------------------------------------------------ 00:11:55.388 0,0 114624/s 260 MiB/s 0 0 00:11:55.388 3,0 114768/s 260 MiB/s 0 0 00:11:55.388 2,0 115024/s 260 MiB/s 0 0 00:11:55.388 1,0 115488/s 262 MiB/s 0 0 00:11:55.388 ==================================================================================== 00:11:55.388 Total 459904/s 1796 MiB/s 0 0' 00:11:55.388 20:29:13 -- accel/accel.sh@20 -- # IFS=: 00:11:55.388 20:29:13 -- accel/accel.sh@20 -- # read -r var val 00:11:55.388 20:29:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:11:55.388 20:29:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:11:55.388 20:29:13 -- accel/accel.sh@12 -- # build_accel_config 00:11:55.388 20:29:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:55.388 20:29:13 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:11:55.388 20:29:13 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:11:55.388 20:29:13 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:11:55.388 20:29:13 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:11:55.388 20:29:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:55.388 20:29:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:55.388 20:29:13 -- accel/accel.sh@41 -- # local IFS=, 00:11:55.388 20:29:13 -- accel/accel.sh@42 -- # jq -r . 00:11:55.388 [2024-04-26 20:29:13.062184] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
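With -m 0xf the same decompress workload fans out over four reactors, one per core in the mask, and the results table gains a Core,Thread row per worker: four near-identical streams of roughly 115000 ops/s each behind the 459904/s total. Sketch, again with the assumed $cfg:

    # Spread the decompress run across cores 0-3 (core mask 0xf).
    # $cfg as defined in the earlier sketch.
    ./spdk/build/examples/accel_perf -c <(echo "$cfg") \
        -t 1 -w decompress -l ./spdk/test/accel/bib -y -m 0xf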
00:11:55.388 [2024-04-26 20:29:13.062316] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3405118 ] 00:11:55.388 EAL: No free 2048 kB hugepages reported on node 1 00:11:55.388 [2024-04-26 20:29:13.178532] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:55.388 [2024-04-26 20:29:13.272983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.388 [2024-04-26 20:29:13.273017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.388 [2024-04-26 20:29:13.273054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.388 [2024-04-26 20:29:13.273042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.388 [2024-04-26 20:29:13.277704] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:55.388 [2024-04-26 20:29:13.285696] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:12:01.976 20:29:19 -- accel/accel.sh@21 -- # val= 00:12:01.976 20:29:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # IFS=: 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # read -r var val 00:12:01.976 20:29:19 -- accel/accel.sh@21 -- # val= 00:12:01.976 20:29:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # IFS=: 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # read -r var val 00:12:01.976 20:29:19 -- accel/accel.sh@21 -- # val= 00:12:01.976 20:29:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # IFS=: 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # read -r var val 00:12:01.976 20:29:19 -- accel/accel.sh@21 -- # val=0xf 00:12:01.976 20:29:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # IFS=: 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # read -r var val 00:12:01.976 20:29:19 -- accel/accel.sh@21 -- # val= 00:12:01.976 20:29:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # IFS=: 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # read -r var val 00:12:01.976 20:29:19 -- accel/accel.sh@21 -- # val= 00:12:01.976 20:29:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # IFS=: 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # read -r var val 00:12:01.976 20:29:19 -- accel/accel.sh@21 -- # val=decompress 00:12:01.976 20:29:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.976 20:29:19 -- accel/accel.sh@24 -- # accel_opc=decompress 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # IFS=: 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # read -r var val 00:12:01.976 20:29:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:01.976 20:29:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # IFS=: 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # read -r var val 00:12:01.976 20:29:19 -- accel/accel.sh@21 -- # val= 00:12:01.976 20:29:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # IFS=: 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # read -r var val 00:12:01.976 20:29:19 -- accel/accel.sh@21 -- # val=iaa 00:12:01.976 20:29:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.976 20:29:19 -- accel/accel.sh@23 -- # accel_module=iaa 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # IFS=: 
00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # read -r var val 00:12:01.976 20:29:19 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:12:01.976 20:29:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # IFS=: 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # read -r var val 00:12:01.976 20:29:19 -- accel/accel.sh@21 -- # val=32 00:12:01.976 20:29:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # IFS=: 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # read -r var val 00:12:01.976 20:29:19 -- accel/accel.sh@21 -- # val=32 00:12:01.976 20:29:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # IFS=: 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # read -r var val 00:12:01.976 20:29:19 -- accel/accel.sh@21 -- # val=1 00:12:01.976 20:29:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # IFS=: 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # read -r var val 00:12:01.976 20:29:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:01.976 20:29:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # IFS=: 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # read -r var val 00:12:01.976 20:29:19 -- accel/accel.sh@21 -- # val=Yes 00:12:01.976 20:29:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # IFS=: 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # read -r var val 00:12:01.976 20:29:19 -- accel/accel.sh@21 -- # val= 00:12:01.976 20:29:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # IFS=: 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # read -r var val 00:12:01.976 20:29:19 -- accel/accel.sh@21 -- # val= 00:12:01.976 20:29:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # IFS=: 00:12:01.976 20:29:19 -- accel/accel.sh@20 -- # read -r var val 00:12:04.517 20:29:22 -- accel/accel.sh@21 -- # val= 00:12:04.517 20:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.517 20:29:22 -- accel/accel.sh@20 -- # IFS=: 00:12:04.517 20:29:22 -- accel/accel.sh@20 -- # read -r var val 00:12:04.517 20:29:22 -- accel/accel.sh@21 -- # val= 00:12:04.517 20:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.517 20:29:22 -- accel/accel.sh@20 -- # IFS=: 00:12:04.517 20:29:22 -- accel/accel.sh@20 -- # read -r var val 00:12:04.517 20:29:22 -- accel/accel.sh@21 -- # val= 00:12:04.517 20:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.517 20:29:22 -- accel/accel.sh@20 -- # IFS=: 00:12:04.517 20:29:22 -- accel/accel.sh@20 -- # read -r var val 00:12:04.517 20:29:22 -- accel/accel.sh@21 -- # val= 00:12:04.517 20:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.517 20:29:22 -- accel/accel.sh@20 -- # IFS=: 00:12:04.517 20:29:22 -- accel/accel.sh@20 -- # read -r var val 00:12:04.517 20:29:22 -- accel/accel.sh@21 -- # val= 00:12:04.517 20:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.517 20:29:22 -- accel/accel.sh@20 -- # IFS=: 00:12:04.517 20:29:22 -- accel/accel.sh@20 -- # read -r var val 00:12:04.517 20:29:22 -- accel/accel.sh@21 -- # val= 00:12:04.517 20:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.517 20:29:22 -- accel/accel.sh@20 -- # IFS=: 00:12:04.517 20:29:22 -- accel/accel.sh@20 -- # read -r var val 00:12:04.517 20:29:22 -- accel/accel.sh@21 -- # val= 00:12:04.517 20:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.517 
20:29:22 -- accel/accel.sh@20 -- # IFS=: 00:12:04.517 20:29:22 -- accel/accel.sh@20 -- # read -r var val 00:12:04.517 20:29:22 -- accel/accel.sh@21 -- # val= 00:12:04.517 20:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.517 20:29:22 -- accel/accel.sh@20 -- # IFS=: 00:12:04.517 20:29:22 -- accel/accel.sh@20 -- # read -r var val 00:12:04.517 20:29:22 -- accel/accel.sh@21 -- # val= 00:12:04.517 20:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.517 20:29:22 -- accel/accel.sh@20 -- # IFS=: 00:12:04.517 20:29:22 -- accel/accel.sh@20 -- # read -r var val 00:12:04.517 20:29:22 -- accel/accel.sh@28 -- # [[ -n iaa ]] 00:12:04.517 20:29:22 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:04.517 20:29:22 -- accel/accel.sh@28 -- # [[ iaa == \i\a\a ]] 00:12:04.517 00:12:04.517 real 0m19.439s 00:12:04.517 user 1m2.185s 00:12:04.517 sys 0m0.509s 00:12:04.517 20:29:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:04.517 20:29:22 -- common/autotest_common.sh@10 -- # set +x 00:12:04.517 ************************************ 00:12:04.517 END TEST accel_decomp_mcore 00:12:04.517 ************************************ 00:12:04.517 20:29:22 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:04.517 20:29:22 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:12:04.517 20:29:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:04.517 20:29:22 -- common/autotest_common.sh@10 -- # set +x 00:12:04.517 ************************************ 00:12:04.517 START TEST accel_decomp_full_mcore 00:12:04.517 ************************************ 00:12:04.517 20:29:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:04.517 20:29:22 -- accel/accel.sh@16 -- # local accel_opc 00:12:04.517 20:29:22 -- accel/accel.sh@17 -- # local accel_module 00:12:04.517 20:29:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:04.517 20:29:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:04.517 20:29:22 -- accel/accel.sh@12 -- # build_accel_config 00:12:04.517 20:29:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:04.517 20:29:22 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:12:04.517 20:29:22 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:12:04.517 20:29:22 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:04.517 20:29:22 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:12:04.517 20:29:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:04.517 20:29:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:04.517 20:29:22 -- accel/accel.sh@41 -- # local IFS=, 00:12:04.517 20:29:22 -- accel/accel.sh@42 -- # jq -r . 00:12:04.517 [2024-04-26 20:29:22.798273] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:12:04.517 [2024-04-26 20:29:22.798354] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3407045 ] 00:12:04.517 EAL: No free 2048 kB hugepages reported on node 1 00:12:04.775 [2024-04-26 20:29:22.883633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:04.775 [2024-04-26 20:29:22.976618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.775 [2024-04-26 20:29:22.976724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:04.775 [2024-04-26 20:29:22.976852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.775 [2024-04-26 20:29:22.976863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:04.775 [2024-04-26 20:29:22.981369] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:12:04.775 [2024-04-26 20:29:22.989362] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:12:14.883 20:29:32 -- accel/accel.sh@18 -- # out='Preparing input file... 00:12:14.883 00:12:14.883 SPDK Configuration: 00:12:14.883 Core mask: 0xf 00:12:14.883 00:12:14.883 Accel Perf Configuration: 00:12:14.883 Workload Type: decompress 00:12:14.883 Transfer size: 111250 bytes 00:12:14.883 Vector count 1 00:12:14.883 Module: iaa 00:12:14.883 File Name: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:12:14.883 Queue depth: 32 00:12:14.883 Allocate depth: 32 00:12:14.883 # threads/core: 1 00:12:14.883 Run time: 1 seconds 00:12:14.883 Verify: Yes 00:12:14.883 00:12:14.883 Running for 1 seconds... 00:12:14.883 00:12:14.883 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:14.883 ------------------------------------------------------------------------------------ 00:12:14.883 0,0 86272/s 4863 MiB/s 0 0 00:12:14.883 3,0 83845/s 4726 MiB/s 0 0 00:12:14.883 2,0 87505/s 4933 MiB/s 0 0 00:12:14.883 1,0 86608/s 4882 MiB/s 0 0 00:12:14.883 ==================================================================================== 00:12:14.883 Total 344230/s 36521 MiB/s 0 0' 00:12:14.883 20:29:32 -- accel/accel.sh@20 -- # IFS=: 00:12:14.883 20:29:32 -- accel/accel.sh@20 -- # read -r var val 00:12:14.883 20:29:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:14.883 20:29:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:14.883 20:29:32 -- accel/accel.sh@12 -- # build_accel_config 00:12:14.883 20:29:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:14.883 20:29:32 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:12:14.883 20:29:32 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:12:14.883 20:29:32 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:14.883 20:29:32 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:12:14.883 20:29:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:14.883 20:29:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:14.883 20:29:32 -- accel/accel.sh@41 -- # local IFS=, 00:12:14.883 20:29:32 -- accel/accel.sh@42 -- # jq -r . 00:12:14.883 [2024-04-26 20:29:32.463366] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
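One detail worth pulling out of the configuration echoed above: the full_mcore variant differs from the plain mcore run only by the extra -o 0 flag, and the reported transfer size changes from 4096 bytes to 111250 bytes, so -o 0 evidently switches accel_perf from fixed 4 KiB blocks to the full decompressed size of the bib input. That reading is an inference from this log, not from accel_perf's source. A hedged side-by-side comparison, reusing SPDK and CFG from the sketch after the first mcore run:

# Hypothetical fixed-block vs full-buffer comparison; the meaning of -o 0
# is inferred from the "Transfer size" lines in this log. $o is left
# unquoted on purpose so the empty first value adds no argument.
for o in "" "-o 0"; do
    "$SPDK/build/examples/accel_perf" -c <(printf '%s' "$CFG") \
        -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -m 0xf $o
done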
00:12:14.883 [2024-04-26 20:29:32.463490] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3409053 ] 00:12:14.883 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.883 [2024-04-26 20:29:32.575867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:14.883 [2024-04-26 20:29:32.667219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.883 [2024-04-26 20:29:32.667320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.883 [2024-04-26 20:29:32.667423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.883 [2024-04-26 20:29:32.667435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:14.883 [2024-04-26 20:29:32.672001] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:12:14.883 [2024-04-26 20:29:32.679988] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:12:21.459 20:29:39 -- accel/accel.sh@21 -- # val= 00:12:21.459 20:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # IFS=: 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # read -r var val 00:12:21.459 20:29:39 -- accel/accel.sh@21 -- # val= 00:12:21.459 20:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # IFS=: 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # read -r var val 00:12:21.459 20:29:39 -- accel/accel.sh@21 -- # val= 00:12:21.459 20:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # IFS=: 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # read -r var val 00:12:21.459 20:29:39 -- accel/accel.sh@21 -- # val=0xf 00:12:21.459 20:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # IFS=: 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # read -r var val 00:12:21.459 20:29:39 -- accel/accel.sh@21 -- # val= 00:12:21.459 20:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # IFS=: 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # read -r var val 00:12:21.459 20:29:39 -- accel/accel.sh@21 -- # val= 00:12:21.459 20:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # IFS=: 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # read -r var val 00:12:21.459 20:29:39 -- accel/accel.sh@21 -- # val=decompress 00:12:21.459 20:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.459 20:29:39 -- accel/accel.sh@24 -- # accel_opc=decompress 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # IFS=: 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # read -r var val 00:12:21.459 20:29:39 -- accel/accel.sh@21 -- # val='111250 bytes' 00:12:21.459 20:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # IFS=: 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # read -r var val 00:12:21.459 20:29:39 -- accel/accel.sh@21 -- # val= 00:12:21.459 20:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # IFS=: 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # read -r var val 00:12:21.459 20:29:39 -- accel/accel.sh@21 -- # val=iaa 00:12:21.459 20:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.459 20:29:39 -- accel/accel.sh@23 -- # accel_module=iaa 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # IFS=: 
00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # read -r var val 00:12:21.459 20:29:39 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:12:21.459 20:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # IFS=: 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # read -r var val 00:12:21.459 20:29:39 -- accel/accel.sh@21 -- # val=32 00:12:21.459 20:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # IFS=: 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # read -r var val 00:12:21.459 20:29:39 -- accel/accel.sh@21 -- # val=32 00:12:21.459 20:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # IFS=: 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # read -r var val 00:12:21.459 20:29:39 -- accel/accel.sh@21 -- # val=1 00:12:21.459 20:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # IFS=: 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # read -r var val 00:12:21.459 20:29:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:21.459 20:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # IFS=: 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # read -r var val 00:12:21.459 20:29:39 -- accel/accel.sh@21 -- # val=Yes 00:12:21.459 20:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # IFS=: 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # read -r var val 00:12:21.459 20:29:39 -- accel/accel.sh@21 -- # val= 00:12:21.459 20:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # IFS=: 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # read -r var val 00:12:21.459 20:29:39 -- accel/accel.sh@21 -- # val= 00:12:21.459 20:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # IFS=: 00:12:21.459 20:29:39 -- accel/accel.sh@20 -- # read -r var val 00:12:24.001 20:29:42 -- accel/accel.sh@21 -- # val= 00:12:24.001 20:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.001 20:29:42 -- accel/accel.sh@20 -- # IFS=: 00:12:24.001 20:29:42 -- accel/accel.sh@20 -- # read -r var val 00:12:24.001 20:29:42 -- accel/accel.sh@21 -- # val= 00:12:24.001 20:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.001 20:29:42 -- accel/accel.sh@20 -- # IFS=: 00:12:24.001 20:29:42 -- accel/accel.sh@20 -- # read -r var val 00:12:24.001 20:29:42 -- accel/accel.sh@21 -- # val= 00:12:24.001 20:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.001 20:29:42 -- accel/accel.sh@20 -- # IFS=: 00:12:24.001 20:29:42 -- accel/accel.sh@20 -- # read -r var val 00:12:24.001 20:29:42 -- accel/accel.sh@21 -- # val= 00:12:24.001 20:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.001 20:29:42 -- accel/accel.sh@20 -- # IFS=: 00:12:24.001 20:29:42 -- accel/accel.sh@20 -- # read -r var val 00:12:24.001 20:29:42 -- accel/accel.sh@21 -- # val= 00:12:24.001 20:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.001 20:29:42 -- accel/accel.sh@20 -- # IFS=: 00:12:24.001 20:29:42 -- accel/accel.sh@20 -- # read -r var val 00:12:24.001 20:29:42 -- accel/accel.sh@21 -- # val= 00:12:24.001 20:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.001 20:29:42 -- accel/accel.sh@20 -- # IFS=: 00:12:24.001 20:29:42 -- accel/accel.sh@20 -- # read -r var val 00:12:24.001 20:29:42 -- accel/accel.sh@21 -- # val= 00:12:24.001 20:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.001 
20:29:42 -- accel/accel.sh@20 -- # IFS=: 00:12:24.001 20:29:42 -- accel/accel.sh@20 -- # read -r var val 00:12:24.001 20:29:42 -- accel/accel.sh@21 -- # val= 00:12:24.001 20:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.001 20:29:42 -- accel/accel.sh@20 -- # IFS=: 00:12:24.001 20:29:42 -- accel/accel.sh@20 -- # read -r var val 00:12:24.001 20:29:42 -- accel/accel.sh@21 -- # val= 00:12:24.001 20:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.001 20:29:42 -- accel/accel.sh@20 -- # IFS=: 00:12:24.001 20:29:42 -- accel/accel.sh@20 -- # read -r var val 00:12:24.001 20:29:42 -- accel/accel.sh@28 -- # [[ -n iaa ]] 00:12:24.001 20:29:42 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:24.001 20:29:42 -- accel/accel.sh@28 -- # [[ iaa == \i\a\a ]] 00:12:24.001 00:12:24.001 real 0m19.381s 00:12:24.001 user 1m2.241s 00:12:24.001 sys 0m0.443s 00:12:24.001 20:29:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:24.001 20:29:42 -- common/autotest_common.sh@10 -- # set +x 00:12:24.001 ************************************ 00:12:24.001 END TEST accel_decomp_full_mcore 00:12:24.001 ************************************ 00:12:24.001 20:29:42 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:12:24.001 20:29:42 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:12:24.001 20:29:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:24.001 20:29:42 -- common/autotest_common.sh@10 -- # set +x 00:12:24.001 ************************************ 00:12:24.001 START TEST accel_decomp_mthread 00:12:24.001 ************************************ 00:12:24.001 20:29:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:12:24.001 20:29:42 -- accel/accel.sh@16 -- # local accel_opc 00:12:24.001 20:29:42 -- accel/accel.sh@17 -- # local accel_module 00:12:24.001 20:29:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:12:24.001 20:29:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:12:24.001 20:29:42 -- accel/accel.sh@12 -- # build_accel_config 00:12:24.001 20:29:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:24.001 20:29:42 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:12:24.001 20:29:42 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:12:24.001 20:29:42 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:24.001 20:29:42 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:12:24.001 20:29:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:24.001 20:29:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:24.001 20:29:42 -- accel/accel.sh@41 -- # local IFS=, 00:12:24.001 20:29:42 -- accel/accel.sh@42 -- # jq -r . 00:12:24.001 [2024-04-26 20:29:42.233309] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:12:24.001 [2024-04-26 20:29:42.233451] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3410890 ] 00:12:24.001 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.260 [2024-04-26 20:29:42.363323] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.260 [2024-04-26 20:29:42.457018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.260 [2024-04-26 20:29:42.461608] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:12:24.260 [2024-04-26 20:29:42.469584] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:12:34.250 20:29:51 -- accel/accel.sh@18 -- # out='Preparing input file... 00:12:34.250 00:12:34.250 SPDK Configuration: 00:12:34.250 Core mask: 0x1 00:12:34.250 00:12:34.250 Accel Perf Configuration: 00:12:34.250 Workload Type: decompress 00:12:34.250 Transfer size: 4096 bytes 00:12:34.250 Vector count 1 00:12:34.250 Module: iaa 00:12:34.250 File Name: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:12:34.250 Queue depth: 32 00:12:34.250 Allocate depth: 32 00:12:34.250 # threads/core: 2 00:12:34.250 Run time: 1 seconds 00:12:34.250 Verify: Yes 00:12:34.250 00:12:34.250 Running for 1 seconds... 00:12:34.250 00:12:34.250 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:34.250 ------------------------------------------------------------------------------------ 00:12:34.250 0,1 149536/s 339 MiB/s 0 0 00:12:34.250 0,0 148096/s 335 MiB/s 0 0 00:12:34.250 ==================================================================================== 00:12:34.250 Total 297632/s 1162 MiB/s 0 0' 00:12:34.250 20:29:51 -- accel/accel.sh@20 -- # IFS=: 00:12:34.250 20:29:51 -- accel/accel.sh@20 -- # read -r var val 00:12:34.250 20:29:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:12:34.250 20:29:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:12:34.250 20:29:51 -- accel/accel.sh@12 -- # build_accel_config 00:12:34.250 20:29:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:34.250 20:29:51 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:12:34.250 20:29:51 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:12:34.250 20:29:51 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:34.250 20:29:51 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:12:34.250 20:29:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:34.250 20:29:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:34.250 20:29:51 -- accel/accel.sh@41 -- # local IFS=, 00:12:34.250 20:29:51 -- accel/accel.sh@42 -- # jq -r . 00:12:34.250 [2024-04-26 20:29:51.957395] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
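The mthread report above shows how -T maps into the output: with -T 2 on core mask 0x1, the header reads "# threads/core: 2" and there is one result row per (core,thread) pair, 0,0 and 0,1. A minimal sketch of that variant, again reusing SPDK and CFG from the first sketch:

# Hypothetical two-threads-per-core run; -T sets "# threads/core" in the
# report and yields one Core,Thread row per pair, as in the output above.
"$SPDK/build/examples/accel_perf" -c <(printf '%s' "$CFG") \
    -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -T 2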
00:12:34.250 [2024-04-26 20:29:51.957511] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3412986 ] 00:12:34.250 EAL: No free 2048 kB hugepages reported on node 1 00:12:34.250 [2024-04-26 20:29:52.067182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.250 [2024-04-26 20:29:52.157067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.250 [2024-04-26 20:29:52.161550] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:12:34.250 [2024-04-26 20:29:52.169537] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:12:40.830 20:29:58 -- accel/accel.sh@21 -- # val= 00:12:40.830 20:29:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.830 20:29:58 -- accel/accel.sh@20 -- # IFS=: 00:12:40.830 20:29:58 -- accel/accel.sh@20 -- # read -r var val 00:12:40.830 20:29:58 -- accel/accel.sh@21 -- # val= 00:12:40.830 20:29:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.830 20:29:58 -- accel/accel.sh@20 -- # IFS=: 00:12:40.830 20:29:58 -- accel/accel.sh@20 -- # read -r var val 00:12:40.830 20:29:58 -- accel/accel.sh@21 -- # val= 00:12:40.830 20:29:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.830 20:29:58 -- accel/accel.sh@20 -- # IFS=: 00:12:40.830 20:29:58 -- accel/accel.sh@20 -- # read -r var val 00:12:40.830 20:29:58 -- accel/accel.sh@21 -- # val=0x1 00:12:40.830 20:29:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.830 20:29:58 -- accel/accel.sh@20 -- # IFS=: 00:12:40.830 20:29:58 -- accel/accel.sh@20 -- # read -r var val 00:12:40.830 20:29:58 -- accel/accel.sh@21 -- # val= 00:12:40.830 20:29:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.830 20:29:58 -- accel/accel.sh@20 -- # IFS=: 00:12:40.830 20:29:58 -- accel/accel.sh@20 -- # read -r var val 00:12:40.830 20:29:58 -- accel/accel.sh@21 -- # val= 00:12:40.830 20:29:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.830 20:29:58 -- accel/accel.sh@20 -- # IFS=: 00:12:40.830 20:29:58 -- accel/accel.sh@20 -- # read -r var val 00:12:40.830 20:29:58 -- accel/accel.sh@21 -- # val=decompress 00:12:40.830 20:29:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.830 20:29:58 -- accel/accel.sh@24 -- # accel_opc=decompress 00:12:40.830 20:29:58 -- accel/accel.sh@20 -- # IFS=: 00:12:40.830 20:29:58 -- accel/accel.sh@20 -- # read -r var val 00:12:40.830 20:29:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:40.830 20:29:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.830 20:29:58 -- accel/accel.sh@20 -- # IFS=: 00:12:40.830 20:29:58 -- accel/accel.sh@20 -- # read -r var val 00:12:40.830 20:29:58 -- accel/accel.sh@21 -- # val= 00:12:40.830 20:29:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.830 20:29:58 -- accel/accel.sh@20 -- # IFS=: 00:12:40.830 20:29:58 -- accel/accel.sh@20 -- # read -r var val 00:12:40.830 20:29:58 -- accel/accel.sh@21 -- # val=iaa 00:12:40.830 20:29:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.830 20:29:58 -- accel/accel.sh@23 -- # accel_module=iaa 00:12:40.830 20:29:58 -- accel/accel.sh@20 -- # IFS=: 00:12:40.830 20:29:58 -- accel/accel.sh@20 -- # read -r var val 00:12:40.830 20:29:58 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:12:40.830 20:29:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.830 20:29:58 -- accel/accel.sh@20 -- # IFS=: 00:12:40.830 20:29:58 -- 
accel/accel.sh@20 -- # read -r var val 00:12:40.830 20:29:58 -- accel/accel.sh@21 -- # val=32 00:12:40.830 20:29:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.830 20:29:58 -- accel/accel.sh@20 -- # IFS=: 00:12:40.830 20:29:58 -- accel/accel.sh@20 -- # read -r var val 00:12:40.830 20:29:58 -- accel/accel.sh@21 -- # val=32 00:12:40.831 20:29:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.831 20:29:58 -- accel/accel.sh@20 -- # IFS=: 00:12:40.831 20:29:58 -- accel/accel.sh@20 -- # read -r var val 00:12:40.831 20:29:58 -- accel/accel.sh@21 -- # val=2 00:12:40.831 20:29:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.831 20:29:58 -- accel/accel.sh@20 -- # IFS=: 00:12:40.831 20:29:58 -- accel/accel.sh@20 -- # read -r var val 00:12:40.831 20:29:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:40.831 20:29:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.831 20:29:58 -- accel/accel.sh@20 -- # IFS=: 00:12:40.831 20:29:58 -- accel/accel.sh@20 -- # read -r var val 00:12:40.831 20:29:58 -- accel/accel.sh@21 -- # val=Yes 00:12:40.831 20:29:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.831 20:29:58 -- accel/accel.sh@20 -- # IFS=: 00:12:40.831 20:29:58 -- accel/accel.sh@20 -- # read -r var val 00:12:40.831 20:29:58 -- accel/accel.sh@21 -- # val= 00:12:40.831 20:29:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.831 20:29:58 -- accel/accel.sh@20 -- # IFS=: 00:12:40.831 20:29:58 -- accel/accel.sh@20 -- # read -r var val 00:12:40.831 20:29:58 -- accel/accel.sh@21 -- # val= 00:12:40.831 20:29:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.831 20:29:58 -- accel/accel.sh@20 -- # IFS=: 00:12:40.831 20:29:58 -- accel/accel.sh@20 -- # read -r var val 00:12:43.410 20:30:01 -- accel/accel.sh@21 -- # val= 00:12:43.410 20:30:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.410 20:30:01 -- accel/accel.sh@20 -- # IFS=: 00:12:43.410 20:30:01 -- accel/accel.sh@20 -- # read -r var val 00:12:43.410 20:30:01 -- accel/accel.sh@21 -- # val= 00:12:43.410 20:30:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.410 20:30:01 -- accel/accel.sh@20 -- # IFS=: 00:12:43.410 20:30:01 -- accel/accel.sh@20 -- # read -r var val 00:12:43.410 20:30:01 -- accel/accel.sh@21 -- # val= 00:12:43.410 20:30:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.410 20:30:01 -- accel/accel.sh@20 -- # IFS=: 00:12:43.410 20:30:01 -- accel/accel.sh@20 -- # read -r var val 00:12:43.410 20:30:01 -- accel/accel.sh@21 -- # val= 00:12:43.410 20:30:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.410 20:30:01 -- accel/accel.sh@20 -- # IFS=: 00:12:43.410 20:30:01 -- accel/accel.sh@20 -- # read -r var val 00:12:43.410 20:30:01 -- accel/accel.sh@21 -- # val= 00:12:43.410 20:30:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.410 20:30:01 -- accel/accel.sh@20 -- # IFS=: 00:12:43.410 20:30:01 -- accel/accel.sh@20 -- # read -r var val 00:12:43.410 20:30:01 -- accel/accel.sh@21 -- # val= 00:12:43.410 20:30:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.410 20:30:01 -- accel/accel.sh@20 -- # IFS=: 00:12:43.410 20:30:01 -- accel/accel.sh@20 -- # read -r var val 00:12:43.410 20:30:01 -- accel/accel.sh@21 -- # val= 00:12:43.410 20:30:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.410 20:30:01 -- accel/accel.sh@20 -- # IFS=: 00:12:43.410 20:30:01 -- accel/accel.sh@20 -- # read -r var val 00:12:43.410 20:30:01 -- accel/accel.sh@28 -- # [[ -n iaa ]] 00:12:43.410 20:30:01 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:43.410 20:30:01 -- accel/accel.sh@28 -- # [[ iaa == \i\a\a ]] 00:12:43.410 
00:12:43.410 real 0m19.385s 00:12:43.410 user 0m6.546s 00:12:43.410 sys 0m0.488s 00:12:43.410 20:30:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:43.410 20:30:01 -- common/autotest_common.sh@10 -- # set +x 00:12:43.410 ************************************ 00:12:43.410 END TEST accel_decomp_mthread 00:12:43.410 ************************************ 00:12:43.410 20:30:01 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:12:43.410 20:30:01 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:12:43.410 20:30:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:43.410 20:30:01 -- common/autotest_common.sh@10 -- # set +x 00:12:43.410 ************************************ 00:12:43.410 START TEST accel_deomp_full_mthread 00:12:43.410 ************************************ 00:12:43.410 20:30:01 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:12:43.410 20:30:01 -- accel/accel.sh@16 -- # local accel_opc 00:12:43.410 20:30:01 -- accel/accel.sh@17 -- # local accel_module 00:12:43.410 20:30:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:12:43.410 20:30:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:12:43.410 20:30:01 -- accel/accel.sh@12 -- # build_accel_config 00:12:43.410 20:30:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:43.410 20:30:01 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:12:43.410 20:30:01 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:12:43.410 20:30:01 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:43.410 20:30:01 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:12:43.410 20:30:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:43.410 20:30:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:43.410 20:30:01 -- accel/accel.sh@41 -- # local IFS=, 00:12:43.410 20:30:01 -- accel/accel.sh@42 -- # jq -r . 00:12:43.410 [2024-04-26 20:30:01.645369] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:43.410 [2024-04-26 20:30:01.645507] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3414930 ] 00:12:43.410 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.730 [2024-04-26 20:30:01.761406] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.730 [2024-04-26 20:30:01.857834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.730 [2024-04-26 20:30:01.862375] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:12:43.730 [2024-04-26 20:30:01.870355] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:12:53.730 20:30:11 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:12:53.730 00:12:53.730 SPDK Configuration: 00:12:53.730 Core mask: 0x1 00:12:53.730 00:12:53.730 Accel Perf Configuration: 00:12:53.730 Workload Type: decompress 00:12:53.730 Transfer size: 111250 bytes 00:12:53.730 Vector count 1 00:12:53.730 Module: iaa 00:12:53.730 File Name: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:12:53.730 Queue depth: 32 00:12:53.730 Allocate depth: 32 00:12:53.730 # threads/core: 2 00:12:53.730 Run time: 1 seconds 00:12:53.730 Verify: Yes 00:12:53.730 00:12:53.730 Running for 1 seconds... 00:12:53.730 00:12:53.730 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:53.730 ------------------------------------------------------------------------------------ 00:12:53.730 0,1 61344/s 3458 MiB/s 0 0 00:12:53.730 0,0 60784/s 3426 MiB/s 0 0 00:12:53.730 ==================================================================================== 00:12:53.730 Total 122128/s 12957 MiB/s 0 0' 00:12:53.730 20:30:11 -- accel/accel.sh@20 -- # IFS=: 00:12:53.730 20:30:11 -- accel/accel.sh@20 -- # read -r var val 00:12:53.730 20:30:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:12:53.730 20:30:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:12:53.730 20:30:11 -- accel/accel.sh@12 -- # build_accel_config 00:12:53.730 20:30:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:53.730 20:30:11 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:12:53.730 20:30:11 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:12:53.730 20:30:11 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:53.730 20:30:11 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:12:53.730 20:30:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:53.730 20:30:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:53.730 20:30:11 -- accel/accel.sh@41 -- # local IFS=, 00:12:53.730 20:30:11 -- accel/accel.sh@42 -- # jq -r . 00:12:53.730 [2024-04-26 20:30:11.365599] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:12:53.730 [2024-04-26 20:30:11.365729] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3417189 ] 00:12:53.730 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.730 [2024-04-26 20:30:11.481904] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.730 [2024-04-26 20:30:11.573172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.730 [2024-04-26 20:30:11.577725] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:12:53.730 [2024-04-26 20:30:11.585707] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:13:00.321 20:30:17 -- accel/accel.sh@21 -- # val= 00:13:00.321 20:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # IFS=: 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # read -r var val 00:13:00.321 20:30:17 -- accel/accel.sh@21 -- # val= 00:13:00.321 20:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # IFS=: 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # read -r var val 00:13:00.321 20:30:17 -- accel/accel.sh@21 -- # val= 00:13:00.321 20:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # IFS=: 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # read -r var val 00:13:00.321 20:30:17 -- accel/accel.sh@21 -- # val=0x1 00:13:00.321 20:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # IFS=: 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # read -r var val 00:13:00.321 20:30:17 -- accel/accel.sh@21 -- # val= 00:13:00.321 20:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # IFS=: 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # read -r var val 00:13:00.321 20:30:17 -- accel/accel.sh@21 -- # val= 00:13:00.321 20:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # IFS=: 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # read -r var val 00:13:00.321 20:30:17 -- accel/accel.sh@21 -- # val=decompress 00:13:00.321 20:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:13:00.321 20:30:17 -- accel/accel.sh@24 -- # accel_opc=decompress 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # IFS=: 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # read -r var val 00:13:00.321 20:30:17 -- accel/accel.sh@21 -- # val='111250 bytes' 00:13:00.321 20:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # IFS=: 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # read -r var val 00:13:00.321 20:30:17 -- accel/accel.sh@21 -- # val= 00:13:00.321 20:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # IFS=: 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # read -r var val 00:13:00.321 20:30:17 -- accel/accel.sh@21 -- # val=iaa 00:13:00.321 20:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:13:00.321 20:30:17 -- accel/accel.sh@23 -- # accel_module=iaa 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # IFS=: 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # read -r var val 00:13:00.321 20:30:17 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:13:00.321 20:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # IFS=: 00:13:00.321 20:30:17 -- 
accel/accel.sh@20 -- # read -r var val 00:13:00.321 20:30:17 -- accel/accel.sh@21 -- # val=32 00:13:00.321 20:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # IFS=: 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # read -r var val 00:13:00.321 20:30:17 -- accel/accel.sh@21 -- # val=32 00:13:00.321 20:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # IFS=: 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # read -r var val 00:13:00.321 20:30:17 -- accel/accel.sh@21 -- # val=2 00:13:00.321 20:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # IFS=: 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # read -r var val 00:13:00.321 20:30:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:13:00.321 20:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # IFS=: 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # read -r var val 00:13:00.321 20:30:17 -- accel/accel.sh@21 -- # val=Yes 00:13:00.321 20:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # IFS=: 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # read -r var val 00:13:00.321 20:30:17 -- accel/accel.sh@21 -- # val= 00:13:00.321 20:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # IFS=: 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # read -r var val 00:13:00.321 20:30:17 -- accel/accel.sh@21 -- # val= 00:13:00.321 20:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # IFS=: 00:13:00.321 20:30:17 -- accel/accel.sh@20 -- # read -r var val 00:13:02.860 20:30:21 -- accel/accel.sh@21 -- # val= 00:13:02.860 20:30:21 -- accel/accel.sh@22 -- # case "$var" in 00:13:02.860 20:30:21 -- accel/accel.sh@20 -- # IFS=: 00:13:02.860 20:30:21 -- accel/accel.sh@20 -- # read -r var val 00:13:02.860 20:30:21 -- accel/accel.sh@21 -- # val= 00:13:02.860 20:30:21 -- accel/accel.sh@22 -- # case "$var" in 00:13:02.860 20:30:21 -- accel/accel.sh@20 -- # IFS=: 00:13:02.860 20:30:21 -- accel/accel.sh@20 -- # read -r var val 00:13:02.860 20:30:21 -- accel/accel.sh@21 -- # val= 00:13:02.860 20:30:21 -- accel/accel.sh@22 -- # case "$var" in 00:13:02.860 20:30:21 -- accel/accel.sh@20 -- # IFS=: 00:13:02.860 20:30:21 -- accel/accel.sh@20 -- # read -r var val 00:13:02.860 20:30:21 -- accel/accel.sh@21 -- # val= 00:13:02.860 20:30:21 -- accel/accel.sh@22 -- # case "$var" in 00:13:02.860 20:30:21 -- accel/accel.sh@20 -- # IFS=: 00:13:02.860 20:30:21 -- accel/accel.sh@20 -- # read -r var val 00:13:02.860 20:30:21 -- accel/accel.sh@21 -- # val= 00:13:02.860 20:30:21 -- accel/accel.sh@22 -- # case "$var" in 00:13:02.860 20:30:21 -- accel/accel.sh@20 -- # IFS=: 00:13:02.860 20:30:21 -- accel/accel.sh@20 -- # read -r var val 00:13:02.860 20:30:21 -- accel/accel.sh@21 -- # val= 00:13:02.860 20:30:21 -- accel/accel.sh@22 -- # case "$var" in 00:13:02.860 20:30:21 -- accel/accel.sh@20 -- # IFS=: 00:13:02.860 20:30:21 -- accel/accel.sh@20 -- # read -r var val 00:13:02.860 20:30:21 -- accel/accel.sh@21 -- # val= 00:13:02.860 20:30:21 -- accel/accel.sh@22 -- # case "$var" in 00:13:02.860 20:30:21 -- accel/accel.sh@20 -- # IFS=: 00:13:02.860 20:30:21 -- accel/accel.sh@20 -- # read -r var val 00:13:02.860 20:30:21 -- accel/accel.sh@28 -- # [[ -n iaa ]] 00:13:02.860 20:30:21 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:13:02.860 20:30:21 -- accel/accel.sh@28 -- # [[ iaa == \i\a\a ]] 00:13:02.860 
00:13:02.860 real 0m19.418s 00:13:02.860 user 0m6.569s 00:13:02.860 sys 0m0.474s 00:13:02.860 20:30:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:02.860 20:30:21 -- common/autotest_common.sh@10 -- # set +x 00:13:02.860 ************************************ 00:13:02.860 END TEST accel_deomp_full_mthread 00:13:02.860 ************************************ 00:13:02.860 20:30:21 -- accel/accel.sh@116 -- # [[ n == y ]] 00:13:02.860 20:30:21 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:02.860 20:30:21 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:02.860 20:30:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:02.860 20:30:21 -- common/autotest_common.sh@10 -- # set +x 00:13:02.860 20:30:21 -- accel/accel.sh@129 -- # build_accel_config 00:13:02.860 20:30:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:02.860 20:30:21 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:13:02.860 20:30:21 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:13:02.860 20:30:21 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:13:02.860 20:30:21 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:13:02.860 20:30:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:02.860 20:30:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:02.860 20:30:21 -- accel/accel.sh@41 -- # local IFS=, 00:13:02.860 20:30:21 -- accel/accel.sh@42 -- # jq -r . 00:13:02.860 ************************************ 00:13:02.860 START TEST accel_dif_functional_tests 00:13:02.860 ************************************ 00:13:02.860 20:30:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:02.860 [2024-04-26 20:30:21.104074] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
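For the last entry in the accel suite the harness swaps accel_perf for the CUnit-based DIF tester at test/accel/dif/dif, fed the same JSON config on an anonymous fd; its output follows below. A direct reproduction, reusing CFG from the first sketch (the binary path and the -c fd-config mechanism are copied from the logged command):

# Run the DIF functional tests the same way the harness does, with the
# accel config supplied on a process-substitution fd instead of /dev/fd/62.
"$SPDK/test/accel/dif/dif" -c <(printf '%s' "$CFG")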
00:13:02.860 [2024-04-26 20:30:21.104167] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3419273 ] 00:13:02.860 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.860 [2024-04-26 20:30:21.192558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:03.121 [2024-04-26 20:30:21.285085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.121 [2024-04-26 20:30:21.285180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.121 [2024-04-26 20:30:21.285187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.121 [2024-04-26 20:30:21.289974] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:13:03.121 [2024-04-26 20:30:21.297964] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:13:11.249 00:13:11.249 00:13:11.249 CUnit - A unit testing framework for C - Version 2.1-3 00:13:11.249 http://cunit.sourceforge.net/ 00:13:11.249 00:13:11.249 00:13:11.249 Suite: accel_dif 00:13:11.249 Test: verify: DIF generated, GUARD check ...passed 00:13:11.249 Test: verify: DIF generated, APPTAG check ...passed 00:13:11.249 Test: verify: DIF generated, REFTAG check ...passed 00:13:11.249 Test: verify: DIF not generated, GUARD check ...[2024-04-26 20:30:29.215831] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:13:11.249 [2024-04-26 20:30:29.215870] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-26 20:30:29.215881] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:13:11.249 [2024-04-26 20:30:29.215890] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:13:11.249 [2024-04-26 20:30:29.215897] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:13:11.249 [2024-04-26 20:30:29.215904] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:13:11.249 [2024-04-26 20:30:29.215910] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:13:11.249 [2024-04-26 20:30:29.215918] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:13:11.249 [2024-04-26 20:30:29.215925] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:13:11.249 [2024-04-26 20:30:29.215950] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:11.249 [2024-04-26 20:30:29.215958] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. 
type=4, offset=0 00:13:11.249 [2024-04-26 20:30:29.215982] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:11.249 passed 00:13:11.249 Test: verify: DIF not generated, APPTAG check ...[2024-04-26 20:30:29.216047] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:13:11.249 [2024-04-26 20:30:29.216057] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-26 20:30:29.216067] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:13:11.249 [2024-04-26 20:30:29.216074] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:13:11.249 [2024-04-26 20:30:29.216082] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:13:11.249 [2024-04-26 20:30:29.216089] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:13:11.249 [2024-04-26 20:30:29.216097] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:13:11.249 [2024-04-26 20:30:29.216104] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:13:11.249 [2024-04-26 20:30:29.216111] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:13:11.249 [2024-04-26 20:30:29.216119] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:11.249 [2024-04-26 20:30:29.216128] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. type=2, offset=0 00:13:11.249 [2024-04-26 20:30:29.216148] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:11.249 passed 00:13:11.249 Test: verify: DIF not generated, REFTAG check ...[2024-04-26 20:30:29.216183] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:13:11.250 [2024-04-26 20:30:29.216195] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-26 20:30:29.216201] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:13:11.250 [2024-04-26 20:30:29.216210] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:13:11.250 [2024-04-26 20:30:29.216216] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:13:11.250 [2024-04-26 20:30:29.216223] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:13:11.250 [2024-04-26 20:30:29.216230] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:13:11.250 [2024-04-26 20:30:29.216243] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:13:11.250 [2024-04-26 20:30:29.216253] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:13:11.250 [2024-04-26 20:30:29.216264] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:11.250 [2024-04-26 20:30:29.216271] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. 
type=1, offset=0 00:13:11.250 [2024-04-26 20:30:29.216293] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:11.250 passed 00:13:11.250 Test: verify: APPTAG correct, APPTAG check ...passed 00:13:11.250 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-26 20:30:29.216369] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:13:11.250 [2024-04-26 20:30:29.216383] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-26 20:30:29.216392] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:13:11.250 [2024-04-26 20:30:29.216398] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:13:11.250 [2024-04-26 20:30:29.216406] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:13:11.250 [2024-04-26 20:30:29.216412] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:13:11.250 [2024-04-26 20:30:29.216419] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:13:11.250 [2024-04-26 20:30:29.216425] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:13:11.250 [2024-04-26 20:30:29.216435] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:13:11.250 [2024-04-26 20:30:29.216444] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:13:11.250 [2024-04-26 20:30:29.216453] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. type=2, offset=0 00:13:11.250 passed 00:13:11.250 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:13:11.250 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:13:11.250 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:13:11.250 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-26 20:30:29.216633] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:13:11.250 [2024-04-26 20:30:29.216644] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-26 20:30:29.216650] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:13:11.250 [2024-04-26 20:30:29.216658] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:13:11.250 [2024-04-26 20:30:29.216664] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:13:11.250 [2024-04-26 20:30:29.216671] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:13:11.250 [2024-04-26 20:30:29.216681] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:13:11.250 [2024-04-26 20:30:29.216689] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:13:11.250 [2024-04-26 20:30:29.216695] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:13:11.250 [2024-04-26 20:30:29.216703] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:13:11.250 [2024-04-26 20:30:29.216708] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-26 20:30:29.216716] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:13:11.250 [2024-04-26 20:30:29.216722] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:13:11.250 [2024-04-26 20:30:29.216730] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:13:11.250 [2024-04-26 20:30:29.216737] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:13:11.250 [2024-04-26 20:30:29.216745] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:13:11.250 [2024-04-26 20:30:29.216751] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:13:11.250 [2024-04-26 20:30:29.216761] idxd_user.c: 
436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:13:11.250 [2024-04-26 20:30:29.216768] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:13:11.250 [2024-04-26 20:30:29.216777] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. type=1, offset=0 00:13:11.250 [2024-04-26 20:30:29.216785] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x5 00:13:11.250 passed[2024-04-26 20:30:29.216794] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw: 00:13:11.250 Test: generate copy: DIF generated, GUARD check ...[2024-04-26 20:30:29.216802] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:13:11.250 [2024-04-26 20:30:29.216814] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:13:11.250 [2024-04-26 20:30:29.216821] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:13:11.250 [2024-04-26 20:30:29.216829] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:13:11.250 [2024-04-26 20:30:29.216835] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:13:11.250 [2024-04-26 20:30:29.216843] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:13:11.250 [2024-04-26 20:30:29.216849] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:13:11.250 passed 00:13:11.250 Test: generate copy: DIF generated, APTTAG check ...passed 00:13:11.250 Test: generate copy: DIF generated, REFTAG check ...passed 00:13:11.250 Test: generate copy: DIF generated, no GUARD check flag set ...[2024-04-26 20:30:29.216992] idxd.c:1571:idxd_validate_dif_insert_params: *ERROR*: Guard check flag must be set. 00:13:11.250 passed 00:13:11.250 Test: generate copy: DIF generated, no APPTAG check flag set ...[2024-04-26 20:30:29.217030] idxd.c:1576:idxd_validate_dif_insert_params: *ERROR*: Application Tag check flag must be set. 00:13:11.250 passed 00:13:11.250 Test: generate copy: DIF generated, no REFTAG check flag set ...[2024-04-26 20:30:29.217068] idxd.c:1581:idxd_validate_dif_insert_params: *ERROR*: Reference Tag check flag must be set. 00:13:11.250 passed 00:13:11.250 Test: generate copy: iovecs-len validate ...[2024-04-26 20:30:29.217105] idxd.c:1608:idxd_validate_dif_insert_iovecs: *ERROR*: Invalid length of data in src (4096) and dst (4176) in iovecs[0]. 
00:13:11.250 passed 00:13:11.250 Test: generate copy: buffer alignment validate ...passed 00:13:11.250 00:13:11.250 Run Summary: Type Total Ran Passed Failed Inactive 00:13:11.250 suites 1 1 n/a 0 0 00:13:11.250 tests 20 20 20 0 0 00:13:11.250 asserts 204 204 204 0 n/a 00:13:11.250 00:13:11.250 Elapsed time = 0.003 seconds 00:13:13.789 00:13:13.789 real 0m11.008s 00:13:13.789 user 0m22.090s 00:13:13.789 sys 0m0.229s 00:13:13.789 20:30:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:13.789 20:30:32 -- common/autotest_common.sh@10 -- # set +x 00:13:13.789 ************************************ 00:13:13.789 END TEST accel_dif_functional_tests 00:13:13.789 ************************************ 00:13:13.789 00:13:13.789 real 7m8.860s 00:13:13.789 user 4m34.120s 00:13:13.789 sys 0m11.881s 00:13:13.789 20:30:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:13.789 20:30:32 -- common/autotest_common.sh@10 -- # set +x 00:13:13.789 ************************************ 00:13:13.789 END TEST accel 00:13:13.789 ************************************ 00:13:13.789 20:30:32 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel_rpc.sh 00:13:13.789 20:30:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:13.789 20:30:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:13.789 20:30:32 -- common/autotest_common.sh@10 -- # set +x 00:13:14.050 ************************************ 00:13:14.050 START TEST accel_rpc 00:13:14.050 ************************************ 00:13:14.050 20:30:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel_rpc.sh 00:13:14.050 * Looking for test storage... 00:13:14.050 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel 00:13:14.050 20:30:32 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:14.050 20:30:32 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3421462 00:13:14.050 20:30:32 -- accel/accel_rpc.sh@15 -- # waitforlisten 3421462 00:13:14.050 20:30:32 -- common/autotest_common.sh@819 -- # '[' -z 3421462 ']' 00:13:14.050 20:30:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.050 20:30:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:14.050 20:30:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.050 20:30:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:14.050 20:30:32 -- common/autotest_common.sh@10 -- # set +x 00:13:14.050 20:30:32 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:13:14.050 [2024-04-26 20:30:32.303166] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
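accel_rpc drives the same module scans over JSON-RPC against a spdk_tgt started with --wait-for-rpc, and the suite expects the second scan of each module to fail with -114 "Operation already in progress", as the request/response pairs below show. A hedged standalone sketch assuming the stock scripts/rpc.py client (rpc_cmd in the log is autotest's wrapper around the same socket):

# Hypothetical reproduction of the double-scan check below; waitforlisten
# in the harness is replaced by a crude sleep here.
"$SPDK/build/bin/spdk_tgt" --wait-for-rpc &
sleep 1
"$SPDK/scripts/rpc.py" dsa_scan_accel_module   # first scan: enables DSA user-mode
"$SPDK/scripts/rpc.py" dsa_scan_accel_module   # second scan: JSON-RPC error -114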
00:13:14.050 [2024-04-26 20:30:32.303314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3421462 ] 00:13:14.050 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.312 [2024-04-26 20:30:32.435940] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.312 [2024-04-26 20:30:32.528655] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:14.312 [2024-04-26 20:30:32.528865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.885 20:30:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:14.885 20:30:33 -- common/autotest_common.sh@852 -- # return 0 00:13:14.885 20:30:33 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:13:14.885 20:30:33 -- accel/accel_rpc.sh@45 -- # [[ 1 -gt 0 ]] 00:13:14.885 20:30:33 -- accel/accel_rpc.sh@46 -- # run_test accel_scan_dsa_modules accel_scan_dsa_modules_test_suite 00:13:14.885 20:30:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:14.885 20:30:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:14.885 20:30:33 -- common/autotest_common.sh@10 -- # set +x 00:13:14.885 ************************************ 00:13:14.885 START TEST accel_scan_dsa_modules 00:13:14.885 ************************************ 00:13:14.885 20:30:33 -- common/autotest_common.sh@1104 -- # accel_scan_dsa_modules_test_suite 00:13:14.885 20:30:33 -- accel/accel_rpc.sh@21 -- # rpc_cmd dsa_scan_accel_module 00:13:14.885 20:30:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.885 20:30:33 -- common/autotest_common.sh@10 -- # set +x 00:13:14.885 [2024-04-26 20:30:33.013364] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:13:14.885 20:30:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.885 20:30:33 -- accel/accel_rpc.sh@22 -- # NOT rpc_cmd dsa_scan_accel_module 00:13:14.885 20:30:33 -- common/autotest_common.sh@640 -- # local es=0 00:13:14.885 20:30:33 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd dsa_scan_accel_module 00:13:14.885 20:30:33 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:13:14.885 20:30:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:14.885 20:30:33 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:13:14.885 20:30:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:14.885 20:30:33 -- common/autotest_common.sh@643 -- # rpc_cmd dsa_scan_accel_module 00:13:14.885 20:30:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.885 20:30:33 -- common/autotest_common.sh@10 -- # set +x 00:13:14.885 request: 00:13:14.885 { 00:13:14.885 "method": "dsa_scan_accel_module", 00:13:14.885 "req_id": 1 00:13:14.885 } 00:13:14.885 Got JSON-RPC error response 00:13:14.885 response: 00:13:14.885 { 00:13:14.885 "code": -114, 00:13:14.885 "message": "Operation already in progress" 00:13:14.885 } 00:13:14.885 20:30:33 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:14.885 20:30:33 -- common/autotest_common.sh@643 -- # es=1 00:13:14.885 20:30:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:14.885 20:30:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:14.885 20:30:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:14.885 00:13:14.885 real 0m0.026s 00:13:14.885 user 0m0.008s 00:13:14.885 sys 0m0.001s 00:13:14.885 
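[editor's note] The trace above shows common.sh's NOT wrapper asserting that a repeated dsa_scan_accel_module fails with the -114 JSON-RPC error. The real helper tracks the exit status in more detail, but conceptually it just inverts success and failure; a minimal hedged equivalent:

NOT() {
  if "$@"; then
    return 1    # command unexpectedly succeeded, so the assertion fails
  fi
  return 0      # command failed, which is what the test requires
}
NOT rpc_cmd dsa_scan_accel_module   # passes only because the second scan errors out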
20:30:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:14.885 20:30:33 -- common/autotest_common.sh@10 -- # set +x 00:13:14.885 ************************************ 00:13:14.885 END TEST accel_scan_dsa_modules 00:13:14.885 ************************************ 00:13:14.885 20:30:33 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:13:14.885 20:30:33 -- accel/accel_rpc.sh@49 -- # [[ 1 -gt 0 ]] 00:13:14.885 20:30:33 -- accel/accel_rpc.sh@50 -- # run_test accel_scan_iaa_modules accel_scan_iaa_modules_test_suite 00:13:14.885 20:30:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:14.885 20:30:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:14.885 20:30:33 -- common/autotest_common.sh@10 -- # set +x 00:13:14.885 ************************************ 00:13:14.885 START TEST accel_scan_iaa_modules 00:13:14.885 ************************************ 00:13:14.885 20:30:33 -- common/autotest_common.sh@1104 -- # accel_scan_iaa_modules_test_suite 00:13:14.885 20:30:33 -- accel/accel_rpc.sh@29 -- # rpc_cmd iaa_scan_accel_module 00:13:14.885 20:30:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.885 20:30:33 -- common/autotest_common.sh@10 -- # set +x 00:13:14.885 [2024-04-26 20:30:33.085360] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:13:14.885 20:30:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.885 20:30:33 -- accel/accel_rpc.sh@30 -- # NOT rpc_cmd iaa_scan_accel_module 00:13:14.885 20:30:33 -- common/autotest_common.sh@640 -- # local es=0 00:13:14.885 20:30:33 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd iaa_scan_accel_module 00:13:14.885 20:30:33 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:13:14.885 20:30:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:14.885 20:30:33 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:13:14.885 20:30:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:14.885 20:30:33 -- common/autotest_common.sh@643 -- # rpc_cmd iaa_scan_accel_module 00:13:14.885 20:30:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.885 20:30:33 -- common/autotest_common.sh@10 -- # set +x 00:13:14.885 request: 00:13:14.885 { 00:13:14.886 "method": "iaa_scan_accel_module", 00:13:14.886 "req_id": 1 00:13:14.886 } 00:13:14.886 Got JSON-RPC error response 00:13:14.886 response: 00:13:14.886 { 00:13:14.886 "code": -114, 00:13:14.886 "message": "Operation already in progress" 00:13:14.886 } 00:13:14.886 20:30:33 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:14.886 20:30:33 -- common/autotest_common.sh@643 -- # es=1 00:13:14.886 20:30:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:14.886 20:30:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:14.886 20:30:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:14.886 00:13:14.886 real 0m0.023s 00:13:14.886 user 0m0.006s 00:13:14.886 sys 0m0.001s 00:13:14.886 20:30:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:14.886 20:30:33 -- common/autotest_common.sh@10 -- # set +x 00:13:14.886 ************************************ 00:13:14.886 END TEST accel_scan_iaa_modules 00:13:14.886 ************************************ 00:13:14.886 20:30:33 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:13:14.886 20:30:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:14.886 20:30:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:14.886 20:30:33 
-- common/autotest_common.sh@10 -- # set +x 00:13:14.886 ************************************ 00:13:14.886 START TEST accel_assign_opcode 00:13:14.886 ************************************ 00:13:14.886 20:30:33 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:13:14.886 20:30:33 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:13:14.886 20:30:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.886 20:30:33 -- common/autotest_common.sh@10 -- # set +x 00:13:14.886 [2024-04-26 20:30:33.149409] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:13:14.886 20:30:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.886 20:30:33 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:13:14.886 20:30:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.886 20:30:33 -- common/autotest_common.sh@10 -- # set +x 00:13:14.886 [2024-04-26 20:30:33.157369] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:13:14.886 20:30:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.886 20:30:33 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:13:14.886 20:30:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.886 20:30:33 -- common/autotest_common.sh@10 -- # set +x 00:13:23.024 20:30:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.024 20:30:41 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:13:23.024 20:30:41 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:13:23.024 20:30:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.024 20:30:41 -- common/autotest_common.sh@10 -- # set +x 00:13:23.024 20:30:41 -- accel/accel_rpc.sh@42 -- # grep software 00:13:23.024 20:30:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.024 software 00:13:23.024 00:13:23.024 real 0m8.182s 00:13:23.024 user 0m0.033s 00:13:23.024 sys 0m0.011s 00:13:23.024 20:30:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:23.024 20:30:41 -- common/autotest_common.sh@10 -- # set +x 00:13:23.024 ************************************ 00:13:23.024 END TEST accel_assign_opcode 00:13:23.024 ************************************ 00:13:23.024 20:30:41 -- accel/accel_rpc.sh@55 -- # killprocess 3421462 00:13:23.024 20:30:41 -- common/autotest_common.sh@926 -- # '[' -z 3421462 ']' 00:13:23.024 20:30:41 -- common/autotest_common.sh@930 -- # kill -0 3421462 00:13:23.024 20:30:41 -- common/autotest_common.sh@931 -- # uname 00:13:23.024 20:30:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:23.024 20:30:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3421462 00:13:23.285 20:30:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:23.285 20:30:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:23.285 20:30:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3421462' 00:13:23.285 killing process with pid 3421462 00:13:23.285 20:30:41 -- common/autotest_common.sh@945 -- # kill 3421462 00:13:23.285 20:30:41 -- common/autotest_common.sh@950 -- # wait 3421462 00:13:26.583 00:13:26.583 real 0m12.610s 00:13:26.583 user 0m4.055s 00:13:26.583 sys 0m0.700s 00:13:26.583 20:30:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:26.583 20:30:44 -- common/autotest_common.sh@10 -- # set +x 00:13:26.583 ************************************ 00:13:26.583 END TEST 
accel_rpc 00:13:26.583 ************************************ 00:13:26.583 20:30:44 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/cmdline.sh 00:13:26.583 20:30:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:26.583 20:30:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:26.583 20:30:44 -- common/autotest_common.sh@10 -- # set +x 00:13:26.583 ************************************ 00:13:26.583 START TEST app_cmdline 00:13:26.583 ************************************ 00:13:26.583 20:30:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/cmdline.sh 00:13:26.583 * Looking for test storage... 00:13:26.583 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app 00:13:26.583 20:30:44 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:13:26.583 20:30:44 -- app/cmdline.sh@17 -- # spdk_tgt_pid=3423982 00:13:26.583 20:30:44 -- app/cmdline.sh@18 -- # waitforlisten 3423982 00:13:26.583 20:30:44 -- common/autotest_common.sh@819 -- # '[' -z 3423982 ']' 00:13:26.583 20:30:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.583 20:30:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:26.583 20:30:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.583 20:30:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:26.583 20:30:44 -- common/autotest_common.sh@10 -- # set +x 00:13:26.583 20:30:44 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:13:26.844 [2024-04-26 20:30:44.958007] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
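[editor's note] The app_cmdline target just started restricts its RPC surface with --rpcs-allowed: only the two listed methods are callable, and everything else is rejected with -32601, exactly as the env_dpdk_get_mem_stats call below demonstrates. A sketch of that behavior (paths illustrative):

build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
scripts/rpc.py spdk_get_version                       # allowed: returns the version object
scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort   # allowed: lists exactly the two methods
scripts/rpc.py env_dpdk_get_mem_stats                 # blocked: -32601 "Method not found"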
00:13:26.844 [2024-04-26 20:30:44.958155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3423982 ] 00:13:26.844 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.844 [2024-04-26 20:30:45.091634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.844 [2024-04-26 20:30:45.181543] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:26.844 [2024-04-26 20:30:45.181757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.412 20:30:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:27.412 20:30:45 -- common/autotest_common.sh@852 -- # return 0 00:13:27.412 20:30:45 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:13:27.412 { 00:13:27.412 "version": "SPDK v24.01.1-pre git sha1 36faa8c31", 00:13:27.412 "fields": { 00:13:27.412 "major": 24, 00:13:27.412 "minor": 1, 00:13:27.412 "patch": 1, 00:13:27.412 "suffix": "-pre", 00:13:27.412 "commit": "36faa8c31" 00:13:27.412 } 00:13:27.412 } 00:13:27.412 20:30:45 -- app/cmdline.sh@22 -- # expected_methods=() 00:13:27.412 20:30:45 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:13:27.412 20:30:45 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:13:27.412 20:30:45 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:13:27.412 20:30:45 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:13:27.412 20:30:45 -- app/cmdline.sh@26 -- # sort 00:13:27.412 20:30:45 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:13:27.412 20:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.412 20:30:45 -- common/autotest_common.sh@10 -- # set +x 00:13:27.672 20:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.672 20:30:45 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:13:27.672 20:30:45 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:13:27.672 20:30:45 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:27.672 20:30:45 -- common/autotest_common.sh@640 -- # local es=0 00:13:27.672 20:30:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:27.672 20:30:45 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:13:27.672 20:30:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:27.672 20:30:45 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:13:27.672 20:30:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:27.672 20:30:45 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:13:27.672 20:30:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:27.672 20:30:45 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:13:27.672 20:30:45 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:13:27.672 20:30:45 -- common/autotest_common.sh@643 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:27.672 request: 00:13:27.672 { 00:13:27.672 "method": "env_dpdk_get_mem_stats", 00:13:27.672 "req_id": 1 00:13:27.672 } 00:13:27.672 Got JSON-RPC error response 00:13:27.672 response: 00:13:27.672 { 00:13:27.672 "code": -32601, 00:13:27.672 "message": "Method not found" 00:13:27.672 } 00:13:27.672 20:30:45 -- common/autotest_common.sh@643 -- # es=1 00:13:27.672 20:30:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:27.672 20:30:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:27.672 20:30:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:27.672 20:30:45 -- app/cmdline.sh@1 -- # killprocess 3423982 00:13:27.672 20:30:45 -- common/autotest_common.sh@926 -- # '[' -z 3423982 ']' 00:13:27.672 20:30:45 -- common/autotest_common.sh@930 -- # kill -0 3423982 00:13:27.672 20:30:45 -- common/autotest_common.sh@931 -- # uname 00:13:27.672 20:30:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:27.672 20:30:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3423982 00:13:27.672 20:30:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:27.672 20:30:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:27.672 20:30:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3423982' 00:13:27.672 killing process with pid 3423982 00:13:27.672 20:30:45 -- common/autotest_common.sh@945 -- # kill 3423982 00:13:27.672 20:30:45 -- common/autotest_common.sh@950 -- # wait 3423982 00:13:28.613 00:13:28.613 real 0m2.028s 00:13:28.613 user 0m2.118s 00:13:28.613 sys 0m0.481s 00:13:28.613 20:30:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:28.613 20:30:46 -- common/autotest_common.sh@10 -- # set +x 00:13:28.613 ************************************ 00:13:28.613 END TEST app_cmdline 00:13:28.613 ************************************ 00:13:28.613 20:30:46 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/version.sh 00:13:28.613 20:30:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:28.613 20:30:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:28.613 20:30:46 -- common/autotest_common.sh@10 -- # set +x 00:13:28.613 ************************************ 00:13:28.613 START TEST version 00:13:28.613 ************************************ 00:13:28.613 20:30:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/version.sh 00:13:28.613 * Looking for test storage... 
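[editor's note] version.sh, which begins here, derives the version string by grepping include/spdk/version.h with the grep/cut/tr pipeline visible in the trace. A minimal re-implementation of that helper (header path assumed relative to the repo root; cut -f2 relies on version.h's tab-separated defines, as in the pipeline above):

get_header_version() {
  grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h |
    cut -f2 | tr -d '"'
}
major=$(get_header_version MAJOR)     # 24
minor=$(get_header_version MINOR)     # 1
patch=$(get_header_version PATCH)     # 1
suffix=$(get_header_version SUFFIX)   # -pre
version=$major.$minor
(( patch != 0 )) && version=$version.$patch
echo "${version}${suffix}"            # 24.1.1-pre, matching the values in this run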
00:13:28.613 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app 00:13:28.613 20:30:46 -- app/version.sh@17 -- # get_header_version major 00:13:28.613 20:30:46 -- app/version.sh@14 -- # cut -f2 00:13:28.613 20:30:46 -- app/version.sh@14 -- # tr -d '"' 00:13:28.613 20:30:46 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:13:28.613 20:30:46 -- app/version.sh@17 -- # major=24 00:13:28.613 20:30:46 -- app/version.sh@18 -- # get_header_version minor 00:13:28.613 20:30:46 -- app/version.sh@14 -- # cut -f2 00:13:28.613 20:30:46 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:13:28.613 20:30:46 -- app/version.sh@14 -- # tr -d '"' 00:13:28.613 20:30:46 -- app/version.sh@18 -- # minor=1 00:13:28.613 20:30:46 -- app/version.sh@19 -- # get_header_version patch 00:13:28.613 20:30:46 -- app/version.sh@14 -- # cut -f2 00:13:28.613 20:30:46 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:13:28.613 20:30:46 -- app/version.sh@14 -- # tr -d '"' 00:13:28.871 20:30:46 -- app/version.sh@19 -- # patch=1 00:13:28.871 20:30:46 -- app/version.sh@20 -- # get_header_version suffix 00:13:28.871 20:30:46 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:13:28.871 20:30:46 -- app/version.sh@14 -- # cut -f2 00:13:28.871 20:30:46 -- app/version.sh@14 -- # tr -d '"' 00:13:28.871 20:30:46 -- app/version.sh@20 -- # suffix=-pre 00:13:28.871 20:30:46 -- app/version.sh@22 -- # version=24.1 00:13:28.871 20:30:46 -- app/version.sh@25 -- # (( patch != 0 )) 00:13:28.871 20:30:46 -- app/version.sh@25 -- # version=24.1.1 00:13:28.871 20:30:46 -- app/version.sh@28 -- # version=24.1.1rc0 00:13:28.871 20:30:46 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python 00:13:28.871 20:30:46 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:13:28.871 20:30:46 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:13:28.871 20:30:46 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:13:28.871 00:13:28.871 real 0m0.143s 00:13:28.871 user 0m0.061s 00:13:28.871 sys 0m0.109s 00:13:28.871 20:30:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:28.871 20:30:46 -- common/autotest_common.sh@10 -- # set +x 00:13:28.871 ************************************ 00:13:28.871 END TEST version 00:13:28.871 ************************************ 00:13:28.871 20:30:47 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:13:28.871 20:30:47 -- spdk/autotest.sh@204 -- # uname -s 00:13:28.871 20:30:47 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:13:28.871 20:30:47 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:13:28.871 20:30:47 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:13:28.871 20:30:47 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:13:28.871 20:30:47 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:13:28.871 20:30:47 -- spdk/autotest.sh@268 -- # timing_exit lib 00:13:28.871 20:30:47 -- common/autotest_common.sh@718 -- # xtrace_disable 
00:13:28.871 20:30:47 -- common/autotest_common.sh@10 -- # set +x 00:13:28.871 20:30:47 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:13:28.871 20:30:47 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:13:28.871 20:30:47 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:13:28.871 20:30:47 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:13:28.871 20:30:47 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:13:28.871 20:30:47 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:13:28.871 20:30:47 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:13:28.871 20:30:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:28.871 20:30:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:28.871 20:30:47 -- common/autotest_common.sh@10 -- # set +x 00:13:28.871 ************************************ 00:13:28.871 START TEST nvmf_tcp 00:13:28.871 ************************************ 00:13:28.871 20:30:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:13:28.871 * Looking for test storage... 00:13:28.871 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf 00:13:28.871 20:30:47 -- nvmf/nvmf.sh@10 -- # uname -s 00:13:28.871 20:30:47 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:13:28.871 20:30:47 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:13:28.871 20:30:47 -- nvmf/common.sh@7 -- # uname -s 00:13:28.871 20:30:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:28.871 20:30:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:28.871 20:30:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:28.871 20:30:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:28.871 20:30:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:28.871 20:30:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:28.871 20:30:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:28.871 20:30:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:28.871 20:30:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:28.871 20:30:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:28.871 20:30:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:13:28.871 20:30:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:13:28.871 20:30:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:28.871 20:30:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:28.871 20:30:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:28.871 20:30:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:28.871 20:30:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.871 20:30:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.871 20:30:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.871 20:30:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:13:28.871 20:30:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.871 20:30:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.871 20:30:47 -- paths/export.sh@5 -- # export PATH 00:13:28.871 20:30:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.871 20:30:47 -- nvmf/common.sh@46 -- # : 0 00:13:28.871 20:30:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:28.871 20:30:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:28.871 20:30:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:28.871 20:30:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:28.871 20:30:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:28.871 20:30:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:28.871 20:30:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:28.871 20:30:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:28.871 20:30:47 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:28.871 20:30:47 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:13:28.871 20:30:47 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:13:28.871 20:30:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:28.871 20:30:47 -- common/autotest_common.sh@10 -- # set +x 00:13:28.871 20:30:47 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:13:28.871 20:30:47 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:28.871 20:30:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:28.871 20:30:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:28.871 20:30:47 -- common/autotest_common.sh@10 -- # set +x 00:13:28.871 ************************************ 00:13:28.871 START TEST nvmf_example 00:13:28.871 ************************************ 00:13:28.871 20:30:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:28.871 * Looking for test storage... 
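[editor's note] A note on the NVME_HOSTNQN/NVME_HOSTID pair set up in nvmf/common.sh above: the host NQN comes from nvme-cli's gen-hostnqn, and the host ID is simply its UUID suffix, which is how the two values in the trace relate. A standalone equivalent (the UUID differs on every machine):

hostnqn=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
hostid=${hostnqn##*uuid:}          # bare <uuid>, as passed via --hostid
echo "--hostnqn=$hostnqn --hostid=$hostid"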
00:13:29.130 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:29.130 20:30:47 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:13:29.130 20:30:47 -- nvmf/common.sh@7 -- # uname -s 00:13:29.130 20:30:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:29.130 20:30:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.130 20:30:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.130 20:30:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:29.130 20:30:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.130 20:30:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.130 20:30:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.130 20:30:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.131 20:30:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.131 20:30:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:29.131 20:30:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:13:29.131 20:30:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:13:29.131 20:30:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.131 20:30:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.131 20:30:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:29.131 20:30:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:29.131 20:30:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.131 20:30:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.131 20:30:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.131 20:30:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.131 20:30:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.131 20:30:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.131 20:30:47 -- paths/export.sh@5 -- # export PATH 00:13:29.131 20:30:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.131 20:30:47 -- nvmf/common.sh@46 -- # : 0 00:13:29.131 20:30:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:29.131 20:30:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:29.131 20:30:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:29.131 20:30:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:29.131 20:30:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:29.131 20:30:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:29.131 20:30:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:29.131 20:30:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:29.131 20:30:47 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:13:29.131 20:30:47 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:13:29.131 20:30:47 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:13:29.131 20:30:47 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:13:29.131 20:30:47 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:13:29.131 20:30:47 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:13:29.131 20:30:47 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:13:29.131 20:30:47 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:13:29.131 20:30:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:29.131 20:30:47 -- common/autotest_common.sh@10 -- # set +x 00:13:29.131 20:30:47 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:13:29.131 20:30:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:29.131 20:30:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:29.131 20:30:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:29.131 20:30:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:29.131 20:30:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:29.131 20:30:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.131 20:30:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:29.131 20:30:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.131 20:30:47 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:13:29.131 20:30:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:29.131 20:30:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:29.131 20:30:47 -- 
common/autotest_common.sh@10 -- # set +x 00:13:34.416 20:30:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:34.416 20:30:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:34.416 20:30:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:34.416 20:30:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:34.416 20:30:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:34.416 20:30:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:34.416 20:30:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:34.416 20:30:52 -- nvmf/common.sh@294 -- # net_devs=() 00:13:34.416 20:30:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:34.416 20:30:52 -- nvmf/common.sh@295 -- # e810=() 00:13:34.416 20:30:52 -- nvmf/common.sh@295 -- # local -ga e810 00:13:34.416 20:30:52 -- nvmf/common.sh@296 -- # x722=() 00:13:34.416 20:30:52 -- nvmf/common.sh@296 -- # local -ga x722 00:13:34.416 20:30:52 -- nvmf/common.sh@297 -- # mlx=() 00:13:34.416 20:30:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:34.416 20:30:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:34.416 20:30:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:34.416 20:30:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:34.416 20:30:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:34.416 20:30:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:34.416 20:30:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:34.416 20:30:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:34.416 20:30:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:34.416 20:30:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:34.416 20:30:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:34.416 20:30:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:34.416 20:30:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:34.416 20:30:52 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:34.416 20:30:52 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:13:34.417 20:30:52 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:13:34.417 20:30:52 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:13:34.417 20:30:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:34.417 20:30:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:34.417 20:30:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:13:34.417 Found 0000:27:00.0 (0x8086 - 0x159b) 00:13:34.417 20:30:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:34.417 20:30:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:34.417 20:30:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.417 20:30:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.417 20:30:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:34.417 20:30:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:34.417 20:30:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:13:34.417 Found 0000:27:00.1 (0x8086 - 0x159b) 00:13:34.417 20:30:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:34.417 20:30:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:34.417 20:30:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.417 20:30:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.417 
20:30:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:34.417 20:30:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:34.417 20:30:52 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:13:34.417 20:30:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:34.417 20:30:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.417 20:30:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:34.417 20:30:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.417 20:30:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:13:34.417 Found net devices under 0000:27:00.0: cvl_0_0 00:13:34.417 20:30:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.417 20:30:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:34.417 20:30:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.417 20:30:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:34.417 20:30:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.417 20:30:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:13:34.417 Found net devices under 0000:27:00.1: cvl_0_1 00:13:34.417 20:30:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.417 20:30:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:34.417 20:30:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:34.417 20:30:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:34.417 20:30:52 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:34.417 20:30:52 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:34.417 20:30:52 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:34.417 20:30:52 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:34.417 20:30:52 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:34.417 20:30:52 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:34.417 20:30:52 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:34.417 20:30:52 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:34.417 20:30:52 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:34.417 20:30:52 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:34.417 20:30:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:34.417 20:30:52 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:34.417 20:30:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:34.417 20:30:52 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:34.417 20:30:52 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:34.417 20:30:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:34.417 20:30:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:34.417 20:30:52 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:34.417 20:30:52 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:34.417 20:30:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:34.417 20:30:52 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:34.417 20:30:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:34.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:34.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:13:34.417 00:13:34.417 --- 10.0.0.2 ping statistics --- 00:13:34.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.417 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:13:34.417 20:30:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:34.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:34.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.397 ms 00:13:34.417 00:13:34.417 --- 10.0.0.1 ping statistics --- 00:13:34.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.417 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:13:34.417 20:30:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:34.417 20:30:52 -- nvmf/common.sh@410 -- # return 0 00:13:34.417 20:30:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:34.417 20:30:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:34.417 20:30:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:34.417 20:30:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:34.417 20:30:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:34.417 20:30:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:34.417 20:30:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:34.417 20:30:52 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:13:34.417 20:30:52 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:13:34.417 20:30:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:34.417 20:30:52 -- common/autotest_common.sh@10 -- # set +x 00:13:34.417 20:30:52 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:13:34.417 20:30:52 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:13:34.417 20:30:52 -- target/nvmf_example.sh@34 -- # nvmfpid=3427973 00:13:34.417 20:30:52 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:34.417 20:30:52 -- target/nvmf_example.sh@36 -- # waitforlisten 3427973 00:13:34.417 20:30:52 -- common/autotest_common.sh@819 -- # '[' -z 3427973 ']' 00:13:34.417 20:30:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.417 20:30:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:34.417 20:30:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
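[editor's note] The nvmf_tcp_init sequence above splits the two cvl ports of one NIC into a point-to-point topology: the target-side port moves into a network namespace with 10.0.0.2, the initiator side keeps 10.0.0.1, port 4420 is opened, and both directions are ping-verified. A condensed sketch of that bring-up, with the interface names from this run (they are host-specific):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1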
00:13:34.417 20:30:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:34.417 20:30:52 -- common/autotest_common.sh@10 -- # set +x 00:13:34.417 20:30:52 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:13:34.677 EAL: No free 2048 kB hugepages reported on node 1 00:13:35.241 20:30:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:35.241 20:30:53 -- common/autotest_common.sh@852 -- # return 0 00:13:35.241 20:30:53 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:13:35.241 20:30:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:35.241 20:30:53 -- common/autotest_common.sh@10 -- # set +x 00:13:35.241 20:30:53 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:35.241 20:30:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.241 20:30:53 -- common/autotest_common.sh@10 -- # set +x 00:13:35.241 20:30:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.241 20:30:53 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:13:35.241 20:30:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.241 20:30:53 -- common/autotest_common.sh@10 -- # set +x 00:13:35.499 20:30:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.499 20:30:53 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:13:35.499 20:30:53 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:35.499 20:30:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.499 20:30:53 -- common/autotest_common.sh@10 -- # set +x 00:13:35.499 20:30:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.500 20:30:53 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:13:35.500 20:30:53 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:35.500 20:30:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.500 20:30:53 -- common/autotest_common.sh@10 -- # set +x 00:13:35.500 20:30:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.500 20:30:53 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:35.500 20:30:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.500 20:30:53 -- common/autotest_common.sh@10 -- # set +x 00:13:35.500 20:30:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.500 20:30:53 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:13:35.500 20:30:53 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:35.500 EAL: No free 2048 kB hugepages reported on node 1 00:13:47.808 Initializing NVMe Controllers 00:13:47.808 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:47.808 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:47.808 Initialization complete. Launching workers. 
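[editor's note] The example target exercised here is assembled with four RPCs and then driven with spdk_nvme_perf over TCP; its latency summary follows below. A sketch of the equivalent manual sequence, with all names, the 64 MiB / 512 B malloc geometry, and the perf flags taken from this trace (paths assumed relative to an SPDK checkout; in the trace the target binary itself ran inside the cvl_0_0_ns_spdk namespace):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512              # 64 MiB bdev, 512 B blocks -> Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'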
00:13:47.808 ======================================================== 00:13:47.808 Latency(us) 00:13:47.808 Device Information : IOPS MiB/s Average min max 00:13:47.808 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19128.73 74.72 3345.38 699.47 15439.60 00:13:47.808 ======================================================== 00:13:47.808 Total : 19128.73 74.72 3345.38 699.47 15439.60 00:13:47.808 00:13:47.808 20:31:03 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:13:47.808 20:31:03 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:13:47.808 20:31:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:47.808 20:31:03 -- nvmf/common.sh@116 -- # sync 00:13:47.808 20:31:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:47.808 20:31:03 -- nvmf/common.sh@119 -- # set +e 00:13:47.808 20:31:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:47.808 20:31:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:47.808 rmmod nvme_tcp 00:13:47.808 rmmod nvme_fabrics 00:13:47.808 rmmod nvme_keyring 00:13:47.808 20:31:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:47.808 20:31:04 -- nvmf/common.sh@123 -- # set -e 00:13:47.808 20:31:04 -- nvmf/common.sh@124 -- # return 0 00:13:47.808 20:31:04 -- nvmf/common.sh@477 -- # '[' -n 3427973 ']' 00:13:47.808 20:31:04 -- nvmf/common.sh@478 -- # killprocess 3427973 00:13:47.808 20:31:04 -- common/autotest_common.sh@926 -- # '[' -z 3427973 ']' 00:13:47.808 20:31:04 -- common/autotest_common.sh@930 -- # kill -0 3427973 00:13:47.808 20:31:04 -- common/autotest_common.sh@931 -- # uname 00:13:47.808 20:31:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:47.808 20:31:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3427973 00:13:47.808 20:31:04 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:13:47.808 20:31:04 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:13:47.808 20:31:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3427973' 00:13:47.808 killing process with pid 3427973 00:13:47.808 20:31:04 -- common/autotest_common.sh@945 -- # kill 3427973 00:13:47.808 20:31:04 -- common/autotest_common.sh@950 -- # wait 3427973 00:13:47.808 nvmf threads initialize successfully 00:13:47.808 bdev subsystem init successfully 00:13:47.808 created a nvmf target service 00:13:47.808 create targets's poll groups done 00:13:47.808 all subsystems of target started 00:13:47.808 nvmf target is running 00:13:47.808 all subsystems of target stopped 00:13:47.808 destroy targets's poll groups done 00:13:47.808 destroyed the nvmf target service 00:13:47.808 bdev subsystem finish successfully 00:13:47.808 nvmf threads destroy successfully 00:13:47.808 20:31:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:47.808 20:31:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:47.808 20:31:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:47.808 20:31:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:47.808 20:31:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:47.808 20:31:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.808 20:31:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.808 20:31:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.378 20:31:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:48.378 20:31:06 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:13:48.378 20:31:06 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:13:48.378 20:31:06 -- common/autotest_common.sh@10 -- # set +x 00:13:48.378 00:13:48.378 real 0m19.499s 00:13:48.378 user 0m46.795s 00:13:48.378 sys 0m5.145s 00:13:48.378 20:31:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:48.378 20:31:06 -- common/autotest_common.sh@10 -- # set +x 00:13:48.378 ************************************ 00:13:48.378 END TEST nvmf_example 00:13:48.378 ************************************ 00:13:48.378 20:31:06 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:48.378 20:31:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:48.378 20:31:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:48.378 20:31:06 -- common/autotest_common.sh@10 -- # set +x 00:13:48.378 ************************************ 00:13:48.378 START TEST nvmf_filesystem 00:13:48.378 ************************************ 00:13:48.378 20:31:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:48.640 * Looking for test storage... 00:13:48.640 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:48.640 20:31:06 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh 00:13:48.640 20:31:06 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:48.640 20:31:06 -- common/autotest_common.sh@34 -- # set -e 00:13:48.640 20:31:06 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:48.640 20:31:06 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:48.640 20:31:06 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/build_config.sh ]] 00:13:48.640 20:31:06 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/build_config.sh 00:13:48.640 20:31:06 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:48.640 20:31:06 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:13:48.640 20:31:06 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:48.640 20:31:06 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:48.640 20:31:06 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:48.640 20:31:06 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:48.641 20:31:06 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:48.641 20:31:06 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:48.641 20:31:06 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:48.641 20:31:06 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:48.641 20:31:06 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:48.641 20:31:06 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:48.641 20:31:06 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:48.641 20:31:06 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:48.641 20:31:06 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:48.641 20:31:06 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:48.641 20:31:06 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:13:48.641 20:31:06 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:48.641 20:31:06 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk 00:13:48.641 20:31:06 -- common/build_config.sh@20 -- # CONFIG_LTO=n 
00:13:48.641 20:31:06 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:13:48.641 20:31:06 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:13:48.641 20:31:06 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:48.641 20:31:06 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:13:48.641 20:31:06 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:13:48.641 20:31:06 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:48.641 20:31:06 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:48.641 20:31:06 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:13:48.641 20:31:06 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:13:48.641 20:31:06 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:13:48.641 20:31:06 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:13:48.641 20:31:06 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:13:48.641 20:31:06 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:13:48.641 20:31:06 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:13:48.641 20:31:06 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:13:48.641 20:31:06 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:13:48.641 20:31:06 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:13:48.641 20:31:06 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:13:48.641 20:31:06 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:13:48.641 20:31:06 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:13:48.641 20:31:06 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:13:48.641 20:31:06 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:13:48.641 20:31:06 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:13:48.641 20:31:06 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:48.641 20:31:06 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:13:48.641 20:31:06 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:13:48.641 20:31:06 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:13:48.641 20:31:06 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:48.641 20:31:06 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:13:48.641 20:31:06 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:13:48.641 20:31:06 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:13:48.641 20:31:06 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:13:48.641 20:31:06 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:13:48.641 20:31:06 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:13:48.641 20:31:06 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:13:48.641 20:31:06 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:13:48.641 20:31:06 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:13:48.641 20:31:06 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:13:48.641 20:31:06 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:13:48.641 20:31:06 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:13:48.641 20:31:06 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:13:48.641 20:31:06 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:13:48.641 20:31:06 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:13:48.641 20:31:06 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:13:48.641 20:31:06 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:13:48.641 20:31:06 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:48.641 20:31:06 -- common/build_config.sh@67 
-- # CONFIG_FC=n 00:13:48.641 20:31:06 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:13:48.641 20:31:06 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:13:48.641 20:31:06 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:13:48.641 20:31:06 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:13:48.641 20:31:06 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:13:48.641 20:31:06 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:13:48.641 20:31:06 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:13:48.641 20:31:06 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:13:48.641 20:31:06 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:13:48.641 20:31:06 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:48.641 20:31:06 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:13:48.641 20:31:06 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:13:48.641 20:31:06 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/applications.sh 00:13:48.641 20:31:06 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/applications.sh 00:13:48.641 20:31:06 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common 00:13:48.641 20:31:06 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/common 00:13:48.641 20:31:06 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:13:48.641 20:31:06 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin 00:13:48.641 20:31:06 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/app 00:13:48.641 20:31:06 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples 00:13:48.641 20:31:06 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:48.641 20:31:06 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:48.641 20:31:06 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:48.641 20:31:06 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:48.641 20:31:06 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:48.641 20:31:06 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:48.641 20:31:06 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/config.h ]] 00:13:48.641 20:31:06 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:48.641 #define SPDK_CONFIG_H 00:13:48.641 #define SPDK_CONFIG_APPS 1 00:13:48.641 #define SPDK_CONFIG_ARCH native 00:13:48.641 #define SPDK_CONFIG_ASAN 1 00:13:48.641 #undef SPDK_CONFIG_AVAHI 00:13:48.641 #undef SPDK_CONFIG_CET 00:13:48.641 #define SPDK_CONFIG_COVERAGE 1 00:13:48.641 #define SPDK_CONFIG_CROSS_PREFIX 00:13:48.641 #undef SPDK_CONFIG_CRYPTO 00:13:48.641 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:48.641 #undef SPDK_CONFIG_CUSTOMOCF 00:13:48.641 #undef SPDK_CONFIG_DAOS 00:13:48.641 #define SPDK_CONFIG_DAOS_DIR 00:13:48.641 #define SPDK_CONFIG_DEBUG 1 00:13:48.641 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:48.641 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:13:48.641 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:48.641 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:48.641 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:48.641 #define 
SPDK_CONFIG_ENV /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk 00:13:48.641 #define SPDK_CONFIG_EXAMPLES 1 00:13:48.641 #undef SPDK_CONFIG_FC 00:13:48.641 #define SPDK_CONFIG_FC_PATH 00:13:48.641 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:48.641 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:48.641 #undef SPDK_CONFIG_FUSE 00:13:48.641 #undef SPDK_CONFIG_FUZZER 00:13:48.641 #define SPDK_CONFIG_FUZZER_LIB 00:13:48.641 #undef SPDK_CONFIG_GOLANG 00:13:48.641 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:48.641 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:48.641 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:48.641 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:48.641 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:48.641 #define SPDK_CONFIG_IDXD 1 00:13:48.641 #undef SPDK_CONFIG_IDXD_KERNEL 00:13:48.641 #undef SPDK_CONFIG_IPSEC_MB 00:13:48.641 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:48.641 #define SPDK_CONFIG_ISAL 1 00:13:48.641 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:48.641 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:48.641 #define SPDK_CONFIG_LIBDIR 00:13:48.641 #undef SPDK_CONFIG_LTO 00:13:48.641 #define SPDK_CONFIG_MAX_LCORES 00:13:48.641 #define SPDK_CONFIG_NVME_CUSE 1 00:13:48.641 #undef SPDK_CONFIG_OCF 00:13:48.641 #define SPDK_CONFIG_OCF_PATH 00:13:48.641 #define SPDK_CONFIG_OPENSSL_PATH 00:13:48.641 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:48.641 #undef SPDK_CONFIG_PGO_USE 00:13:48.641 #define SPDK_CONFIG_PREFIX /usr/local 00:13:48.641 #undef SPDK_CONFIG_RAID5F 00:13:48.641 #undef SPDK_CONFIG_RBD 00:13:48.641 #define SPDK_CONFIG_RDMA 1 00:13:48.641 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:48.641 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:48.641 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:48.641 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:48.641 #define SPDK_CONFIG_SHARED 1 00:13:48.641 #undef SPDK_CONFIG_SMA 00:13:48.641 #define SPDK_CONFIG_TESTS 1 00:13:48.641 #undef SPDK_CONFIG_TSAN 00:13:48.641 #define SPDK_CONFIG_UBLK 1 00:13:48.641 #define SPDK_CONFIG_UBSAN 1 00:13:48.641 #undef SPDK_CONFIG_UNIT_TESTS 00:13:48.641 #undef SPDK_CONFIG_URING 00:13:48.641 #define SPDK_CONFIG_URING_PATH 00:13:48.641 #undef SPDK_CONFIG_URING_ZNS 00:13:48.641 #undef SPDK_CONFIG_USDT 00:13:48.641 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:48.641 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:48.641 #undef SPDK_CONFIG_VFIO_USER 00:13:48.641 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:48.641 #define SPDK_CONFIG_VHOST 1 00:13:48.641 #define SPDK_CONFIG_VIRTIO 1 00:13:48.641 #undef SPDK_CONFIG_VTUNE 00:13:48.641 #define SPDK_CONFIG_VTUNE_DIR 00:13:48.641 #define SPDK_CONFIG_WERROR 1 00:13:48.641 #define SPDK_CONFIG_WPDK_DIR 00:13:48.641 #undef SPDK_CONFIG_XNVME 00:13:48.641 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:48.641 20:31:06 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:48.641 20:31:06 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:48.641 20:31:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.642 20:31:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.642 20:31:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.642 20:31:06 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.642 20:31:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.642 20:31:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.642 20:31:06 -- paths/export.sh@5 -- # export PATH 00:13:48.642 20:31:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.642 20:31:06 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/common 00:13:48.642 20:31:06 -- pm/common@6 -- # dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/common 00:13:48.642 20:31:06 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm 00:13:48.642 20:31:06 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm 00:13:48.642 20:31:06 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/../../../ 00:13:48.642 20:31:06 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:13:48.642 20:31:06 -- pm/common@16 -- # TEST_TAG=N/A 00:13:48.642 20:31:06 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/dsa-phy-autotest/spdk/.run_test_name 00:13:48.642 20:31:06 -- common/autotest_common.sh@52 -- # : 1 00:13:48.642 20:31:06 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:13:48.642 20:31:06 -- common/autotest_common.sh@56 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:48.642 20:31:06 -- common/autotest_common.sh@58 -- # : 0 00:13:48.642 
20:31:06 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:13:48.642 20:31:06 -- common/autotest_common.sh@60 -- # : 1 00:13:48.642 20:31:06 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:48.642 20:31:06 -- common/autotest_common.sh@62 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:13:48.642 20:31:06 -- common/autotest_common.sh@64 -- # : 00:13:48.642 20:31:06 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:13:48.642 20:31:06 -- common/autotest_common.sh@66 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:13:48.642 20:31:06 -- common/autotest_common.sh@68 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:13:48.642 20:31:06 -- common/autotest_common.sh@70 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:13:48.642 20:31:06 -- common/autotest_common.sh@72 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:48.642 20:31:06 -- common/autotest_common.sh@74 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:13:48.642 20:31:06 -- common/autotest_common.sh@76 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:13:48.642 20:31:06 -- common/autotest_common.sh@78 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:13:48.642 20:31:06 -- common/autotest_common.sh@80 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:13:48.642 20:31:06 -- common/autotest_common.sh@82 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:13:48.642 20:31:06 -- common/autotest_common.sh@84 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:13:48.642 20:31:06 -- common/autotest_common.sh@86 -- # : 1 00:13:48.642 20:31:06 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:13:48.642 20:31:06 -- common/autotest_common.sh@88 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:13:48.642 20:31:06 -- common/autotest_common.sh@90 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:48.642 20:31:06 -- common/autotest_common.sh@92 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:13:48.642 20:31:06 -- common/autotest_common.sh@94 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:13:48.642 20:31:06 -- common/autotest_common.sh@96 -- # : tcp 00:13:48.642 20:31:06 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:48.642 20:31:06 -- common/autotest_common.sh@98 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:13:48.642 20:31:06 -- common/autotest_common.sh@100 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:13:48.642 20:31:06 -- common/autotest_common.sh@102 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:13:48.642 20:31:06 -- common/autotest_common.sh@104 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:13:48.642 20:31:06 -- common/autotest_common.sh@106 -- # : 0 
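The ": 0" / "export SPDK_TEST_*" pairs running through this block are the xtrace of bash's null command used for default assignment: each flag is given a fallback value only if the caller has not already set it, which is how the values sourced from autorun-spdk.conf survive here (a flag already set to 1 traces as ": 1" because the expansion echoes the existing value). A minimal sketch of the idiom, with flag names taken from the trace:

    # ":" is the shell no-op; the ${VAR=default} expansion inside it does the
    # real work, assigning only when the variable is currently unset.
    : "${SPDK_RUN_VALGRIND=0}"
    export SPDK_RUN_VALGRIND

    : "${SPDK_TEST_NVMF=0}"
    export SPDK_TEST_NVMF

    # SPDK_TEST_NVMF=1 from autorun-spdk.conf wins over the default above,
    # while untouched flags export as 0.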
00:13:48.642 20:31:06 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:13:48.642 20:31:06 -- common/autotest_common.sh@108 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:13:48.642 20:31:06 -- common/autotest_common.sh@110 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:13:48.642 20:31:06 -- common/autotest_common.sh@112 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:48.642 20:31:06 -- common/autotest_common.sh@114 -- # : 1 00:13:48.642 20:31:06 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:13:48.642 20:31:06 -- common/autotest_common.sh@116 -- # : 1 00:13:48.642 20:31:06 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:13:48.642 20:31:06 -- common/autotest_common.sh@118 -- # : 00:13:48.642 20:31:06 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:48.642 20:31:06 -- common/autotest_common.sh@120 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:13:48.642 20:31:06 -- common/autotest_common.sh@122 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:13:48.642 20:31:06 -- common/autotest_common.sh@124 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:13:48.642 20:31:06 -- common/autotest_common.sh@126 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:13:48.642 20:31:06 -- common/autotest_common.sh@128 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:13:48.642 20:31:06 -- common/autotest_common.sh@130 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:13:48.642 20:31:06 -- common/autotest_common.sh@132 -- # : 00:13:48.642 20:31:06 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:13:48.642 20:31:06 -- common/autotest_common.sh@134 -- # : true 00:13:48.642 20:31:06 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:13:48.642 20:31:06 -- common/autotest_common.sh@136 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:13:48.642 20:31:06 -- common/autotest_common.sh@138 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:13:48.642 20:31:06 -- common/autotest_common.sh@140 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:13:48.642 20:31:06 -- common/autotest_common.sh@142 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:13:48.642 20:31:06 -- common/autotest_common.sh@144 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:13:48.642 20:31:06 -- common/autotest_common.sh@146 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:13:48.642 20:31:06 -- common/autotest_common.sh@148 -- # : 00:13:48.642 20:31:06 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:13:48.642 20:31:06 -- common/autotest_common.sh@150 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:13:48.642 20:31:06 -- common/autotest_common.sh@152 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:13:48.642 20:31:06 -- common/autotest_common.sh@154 -- # 
: 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:13:48.642 20:31:06 -- common/autotest_common.sh@156 -- # : 1 00:13:48.642 20:31:06 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:13:48.642 20:31:06 -- common/autotest_common.sh@158 -- # : 1 00:13:48.642 20:31:06 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:13:48.642 20:31:06 -- common/autotest_common.sh@160 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:13:48.642 20:31:06 -- common/autotest_common.sh@163 -- # : 00:13:48.642 20:31:06 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:13:48.642 20:31:06 -- common/autotest_common.sh@165 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:13:48.642 20:31:06 -- common/autotest_common.sh@167 -- # : 0 00:13:48.642 20:31:06 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:48.642 20:31:06 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib 00:13:48.642 20:31:06 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib 00:13:48.643 20:31:06 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib 00:13:48.643 20:31:06 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib 00:13:48.643 20:31:06 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:48.643 20:31:06 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:48.643 20:31:06 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:48.643 20:31:06 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 
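Lines @171-@174 just above export the freshly built SPDK, DPDK, and libvfio-user library directories and prepend them to LD_LIBRARY_PATH, so the run resolves shared objects from the workspace build rather than any system copies. Because every sourced script prepends again, the same triple repeats in the final value; a sketch of a duplicate-free alternative (the helper name is illustrative, not part of the harness):

    # Prepend a directory to LD_LIBRARY_PATH only if it is not already there.
    prepend_ld_path() {
        case ":${LD_LIBRARY_PATH}:" in
            *":$1:"*) ;;                     # already present, skip
            *) LD_LIBRARY_PATH="$1${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}" ;;
        esac
    }

    prepend_ld_path "$SPDK_LIB_DIR"
    prepend_ld_path "$DPDK_LIB_DIR"
    prepend_ld_path "$VFIO_LIB_DIR"
    export LD_LIBRARY_PATH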
00:13:48.643 20:31:06 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:48.643 20:31:06 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:48.643 20:31:06 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python 00:13:48.643 20:31:06 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python 00:13:48.643 20:31:06 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:48.643 20:31:06 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:13:48.643 20:31:06 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:48.643 20:31:06 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:48.643 20:31:06 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:48.643 20:31:06 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:48.643 20:31:06 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:48.643 20:31:06 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:13:48.643 20:31:06 -- common/autotest_common.sh@196 -- # cat 00:13:48.643 20:31:06 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:13:48.643 20:31:06 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:48.643 20:31:06 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:48.643 20:31:06 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:48.643 20:31:06 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:48.643 20:31:06 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:13:48.643 20:31:06 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:13:48.643 20:31:06 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin 00:13:48.643 20:31:06 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin 00:13:48.643 20:31:06 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples 00:13:48.643 20:31:06 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples 00:13:48.643 20:31:06 -- common/autotest_common.sh@239 -- # export 
QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:48.643 20:31:06 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:48.643 20:31:06 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:48.643 20:31:06 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:48.643 20:31:06 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:48.643 20:31:06 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:48.643 20:31:06 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:48.643 20:31:06 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:48.643 20:31:06 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:13:48.643 20:31:06 -- common/autotest_common.sh@249 -- # export valgrind= 00:13:48.643 20:31:06 -- common/autotest_common.sh@249 -- # valgrind= 00:13:48.643 20:31:06 -- common/autotest_common.sh@255 -- # uname -s 00:13:48.643 20:31:06 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:13:48.643 20:31:06 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:13:48.643 20:31:06 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:13:48.643 20:31:06 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:13:48.643 20:31:06 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:13:48.643 20:31:06 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:13:48.643 20:31:06 -- common/autotest_common.sh@265 -- # MAKE=make 00:13:48.643 20:31:06 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j128 00:13:48.643 20:31:06 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:13:48.643 20:31:06 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:13:48.643 20:31:06 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/dsa-phy-autotest/spdk/../output ']' 00:13:48.643 20:31:06 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:13:48.643 20:31:06 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:13:48.643 20:31:06 -- common/autotest_common.sh@291 -- # for i in "$@" 00:13:48.643 20:31:06 -- common/autotest_common.sh@292 -- # case "$i" in 00:13:48.643 20:31:06 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:13:48.643 20:31:06 -- common/autotest_common.sh@309 -- # [[ -z 3430812 ]] 00:13:48.643 20:31:06 -- common/autotest_common.sh@309 -- # kill -0 3430812 00:13:48.643 20:31:06 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:13:48.643 20:31:06 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:13:48.643 20:31:06 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:13:48.643 20:31:06 -- common/autotest_common.sh@322 -- # local mount target_dir 00:13:48.643 20:31:06 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:13:48.643 20:31:06 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:13:48.643 20:31:06 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:13:48.643 20:31:06 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:13:48.643 20:31:06 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.bmc4tc 00:13:48.643 20:31:06 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" 
"$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:48.643 20:31:06 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:13:48.643 20:31:06 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:13:48.643 20:31:06 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target /tmp/spdk.bmc4tc/tests/target /tmp/spdk.bmc4tc 00:13:48.643 20:31:06 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:13:48.643 20:31:06 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:13:48.643 20:31:06 -- common/autotest_common.sh@318 -- # df -T 00:13:48.643 20:31:06 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:13:48.643 20:31:06 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:13:48.643 20:31:06 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:13:48.643 20:31:06 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:13:48.643 20:31:06 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:13:48.643 20:31:06 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:13:48.643 20:31:06 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:13:48.643 20:31:06 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:13:48.643 20:31:06 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:13:48.643 20:31:06 -- common/autotest_common.sh@353 -- # avails["$mount"]=873099264 00:13:48.643 20:31:06 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:13:48.643 20:31:06 -- common/autotest_common.sh@354 -- # uses["$mount"]=4411330560 00:13:48.643 20:31:06 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:13:48.643 20:31:06 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:13:48.643 20:31:06 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:13:48.643 20:31:06 -- common/autotest_common.sh@353 -- # avails["$mount"]=258094878720 00:13:48.643 20:31:06 -- common/autotest_common.sh@353 -- # sizes["$mount"]=264763871232 00:13:48.643 20:31:06 -- common/autotest_common.sh@354 -- # uses["$mount"]=6668992512 00:13:48.643 20:31:06 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:13:48.643 20:31:06 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:13:48.643 20:31:06 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:13:48.643 20:31:06 -- common/autotest_common.sh@353 -- # avails["$mount"]=132379340800 00:13:48.643 20:31:06 -- common/autotest_common.sh@353 -- # sizes["$mount"]=132381933568 00:13:48.643 20:31:06 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:13:48.643 20:31:06 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:13:48.643 20:31:06 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:13:48.643 20:31:06 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:13:48.643 20:31:06 -- common/autotest_common.sh@353 -- # avails["$mount"]=52943097856 00:13:48.643 20:31:06 -- common/autotest_common.sh@353 -- # sizes["$mount"]=52952776704 00:13:48.643 20:31:06 -- common/autotest_common.sh@354 -- # uses["$mount"]=9678848 00:13:48.643 20:31:06 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:13:48.643 20:31:06 -- common/autotest_common.sh@352 -- # mounts["$mount"]=efivarfs 00:13:48.643 20:31:06 -- common/autotest_common.sh@352 -- # fss["$mount"]=efivarfs 00:13:48.643 20:31:06 -- 
common/autotest_common.sh@353 -- # avails["$mount"]=197632 00:13:48.644 20:31:06 -- common/autotest_common.sh@353 -- # sizes["$mount"]=507904 00:13:48.644 20:31:06 -- common/autotest_common.sh@354 -- # uses["$mount"]=306176 00:13:48.644 20:31:06 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:13:48.644 20:31:06 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:13:48.644 20:31:06 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:13:48.644 20:31:06 -- common/autotest_common.sh@353 -- # avails["$mount"]=132381118464 00:13:48.644 20:31:06 -- common/autotest_common.sh@353 -- # sizes["$mount"]=132381937664 00:13:48.644 20:31:06 -- common/autotest_common.sh@354 -- # uses["$mount"]=819200 00:13:48.644 20:31:06 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:13:48.644 20:31:06 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:13:48.644 20:31:06 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:13:48.644 20:31:06 -- common/autotest_common.sh@353 -- # avails["$mount"]=26476380160 00:13:48.644 20:31:06 -- common/autotest_common.sh@353 -- # sizes["$mount"]=26476384256 00:13:48.644 20:31:06 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:13:48.644 20:31:06 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:13:48.644 20:31:06 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:13:48.644 * Looking for test storage... 00:13:48.644 20:31:06 -- common/autotest_common.sh@359 -- # local target_space new_size 00:13:48.644 20:31:06 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:13:48.644 20:31:06 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:48.644 20:31:06 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:48.644 20:31:06 -- common/autotest_common.sh@363 -- # mount=/ 00:13:48.644 20:31:06 -- common/autotest_common.sh@365 -- # target_space=258094878720 00:13:48.644 20:31:06 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:13:48.644 20:31:06 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:13:48.644 20:31:06 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:13:48.644 20:31:06 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:13:48.644 20:31:06 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:13:48.644 20:31:06 -- common/autotest_common.sh@372 -- # new_size=8883585024 00:13:48.644 20:31:06 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:13:48.644 20:31:06 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:48.644 20:31:06 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:48.644 20:31:06 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:48.644 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:48.644 20:31:06 -- common/autotest_common.sh@380 -- # return 0 00:13:48.644 20:31:06 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:13:48.644 20:31:06 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:13:48.644 20:31:06 -- common/autotest_common.sh@1669 -- # trap 'trap 
- ERR; print_backtrace >&2' ERR 00:13:48.644 20:31:06 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:48.644 20:31:06 -- common/autotest_common.sh@1672 -- # true 00:13:48.644 20:31:06 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:13:48.644 20:31:06 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:13:48.644 20:31:06 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:13:48.644 20:31:06 -- common/autotest_common.sh@27 -- # exec 00:13:48.644 20:31:06 -- common/autotest_common.sh@29 -- # exec 00:13:48.644 20:31:06 -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:48.644 20:31:06 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:13:48.644 20:31:06 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:48.644 20:31:06 -- common/autotest_common.sh@18 -- # set -x 00:13:48.644 20:31:06 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:13:48.644 20:31:06 -- nvmf/common.sh@7 -- # uname -s 00:13:48.644 20:31:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.644 20:31:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.644 20:31:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.644 20:31:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.644 20:31:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.644 20:31:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.644 20:31:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.644 20:31:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.644 20:31:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.644 20:31:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.644 20:31:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:13:48.644 20:31:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:13:48.644 20:31:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.644 20:31:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.644 20:31:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:48.644 20:31:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:48.644 20:31:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.644 20:31:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.644 20:31:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.644 20:31:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.644 20:31:06 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.644 20:31:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.644 20:31:06 -- paths/export.sh@5 -- # export PATH 00:13:48.644 20:31:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.644 20:31:06 -- nvmf/common.sh@46 -- # : 0 00:13:48.644 20:31:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:48.644 20:31:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:48.644 20:31:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:48.644 20:31:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.644 20:31:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.644 20:31:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:48.644 20:31:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:48.644 20:31:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:48.644 20:31:06 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:13:48.644 20:31:06 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:48.644 20:31:06 -- target/filesystem.sh@15 -- # nvmftestinit 00:13:48.644 20:31:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:48.644 20:31:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.644 20:31:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:48.644 20:31:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:48.644 20:31:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:48.644 20:31:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.644 20:31:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.644 20:31:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.644 20:31:06 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:13:48.644 20:31:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 
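gather_supported_nvmf_pci_devs, entered on the line above, builds per-family device lists (e810, x722, mlx) from a PCI bus cache keyed by vendor:device id and then, as the following lines show, resolves each matching function to its kernel net devices through sysfs. A simplified sketch of that resolution, assuming only the standard /sys/bus/pci layout rather than the harness's cache:

    # For each Intel (0x8086) function, list the netdevs it exposes.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor")        # e.g. 0x8086
        device=$(<"$pci/device")        # e.g. 0x159b (E810, ice driver)
        [[ $vendor == 0x8086 ]] || continue
        pci_net_devs=("$pci/net/"*)     # unexpanded glob => no bound driver
        [[ -e ${pci_net_devs[0]} ]] || continue
        echo "Found ${pci##*/} ($vendor - $device): ${pci_net_devs[@]##*/}"
    done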
00:13:48.644 20:31:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:48.644 20:31:06 -- common/autotest_common.sh@10 -- # set +x 00:13:55.222 20:31:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:55.222 20:31:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:55.222 20:31:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:55.222 20:31:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:55.222 20:31:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:55.222 20:31:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:55.222 20:31:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:55.222 20:31:12 -- nvmf/common.sh@294 -- # net_devs=() 00:13:55.222 20:31:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:55.222 20:31:12 -- nvmf/common.sh@295 -- # e810=() 00:13:55.222 20:31:12 -- nvmf/common.sh@295 -- # local -ga e810 00:13:55.222 20:31:12 -- nvmf/common.sh@296 -- # x722=() 00:13:55.222 20:31:12 -- nvmf/common.sh@296 -- # local -ga x722 00:13:55.222 20:31:12 -- nvmf/common.sh@297 -- # mlx=() 00:13:55.222 20:31:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:55.222 20:31:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:55.222 20:31:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:55.222 20:31:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:55.222 20:31:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:55.222 20:31:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:55.222 20:31:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:55.222 20:31:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:55.222 20:31:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:55.222 20:31:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:55.222 20:31:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:55.223 20:31:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:55.223 20:31:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:55.223 20:31:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:55.223 20:31:12 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:13:55.223 20:31:12 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:13:55.223 20:31:12 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:13:55.223 20:31:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:55.223 20:31:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:55.223 20:31:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:13:55.223 Found 0000:27:00.0 (0x8086 - 0x159b) 00:13:55.223 20:31:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:55.223 20:31:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:55.223 20:31:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.223 20:31:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.223 20:31:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:55.223 20:31:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:55.223 20:31:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:13:55.223 Found 0000:27:00.1 (0x8086 - 0x159b) 00:13:55.223 20:31:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:55.223 20:31:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:55.223 20:31:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:13:55.223 20:31:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.223 20:31:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:55.223 20:31:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:55.223 20:31:12 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:13:55.223 20:31:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:55.223 20:31:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.223 20:31:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:55.223 20:31:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.223 20:31:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:13:55.223 Found net devices under 0000:27:00.0: cvl_0_0 00:13:55.223 20:31:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.223 20:31:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:55.223 20:31:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.223 20:31:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:55.223 20:31:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.223 20:31:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:13:55.223 Found net devices under 0000:27:00.1: cvl_0_1 00:13:55.223 20:31:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.223 20:31:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:55.223 20:31:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:55.223 20:31:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:55.223 20:31:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:55.223 20:31:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:55.223 20:31:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:55.223 20:31:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:55.223 20:31:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:55.223 20:31:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:55.223 20:31:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:55.223 20:31:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:55.223 20:31:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:55.223 20:31:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:55.223 20:31:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:55.223 20:31:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:55.223 20:31:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:55.223 20:31:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:55.223 20:31:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:55.223 20:31:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:55.223 20:31:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:55.223 20:31:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:55.223 20:31:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:55.223 20:31:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:55.223 20:31:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:55.223 20:31:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:55.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:55.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:13:55.223 00:13:55.223 --- 10.0.0.2 ping statistics --- 00:13:55.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.223 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:13:55.223 20:31:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:55.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:55.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:13:55.223 00:13:55.223 --- 10.0.0.1 ping statistics --- 00:13:55.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.223 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:13:55.223 20:31:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:55.223 20:31:12 -- nvmf/common.sh@410 -- # return 0 00:13:55.223 20:31:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:55.223 20:31:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:55.223 20:31:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:55.223 20:31:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:55.223 20:31:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:55.223 20:31:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:55.223 20:31:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:55.223 20:31:12 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:13:55.223 20:31:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:55.223 20:31:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:55.223 20:31:12 -- common/autotest_common.sh@10 -- # set +x 00:13:55.223 ************************************ 00:13:55.223 START TEST nvmf_filesystem_no_in_capsule 00:13:55.223 ************************************ 00:13:55.223 20:31:12 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:13:55.223 20:31:12 -- target/filesystem.sh@47 -- # in_capsule=0 00:13:55.223 20:31:12 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:55.223 20:31:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:55.223 20:31:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:55.223 20:31:12 -- common/autotest_common.sh@10 -- # set +x 00:13:55.223 20:31:12 -- nvmf/common.sh@469 -- # nvmfpid=3434368 00:13:55.223 20:31:12 -- nvmf/common.sh@470 -- # waitforlisten 3434368 00:13:55.223 20:31:12 -- common/autotest_common.sh@819 -- # '[' -z 3434368 ']' 00:13:55.223 20:31:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.223 20:31:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:55.223 20:31:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.223 20:31:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:55.223 20:31:12 -- common/autotest_common.sh@10 -- # set +x 00:13:55.223 20:31:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:55.223 [2024-04-26 20:31:13.062923] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
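The nvmf_tcp_init sequence traced above (nvmf/common.sh@228-@267) is worth reading as one unit: with the two E810 ports looped to each other, moving the target-side port into a private network namespace forces NVMe/TCP traffic across the physical link instead of the local loopback path, and the pings in both directions prove the topology before nvmf_tgt, now starting above, binds its listener. Condensed from the trace:

    # Split the two looped ports: the target side moves into its own netns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target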
00:13:55.223 [2024-04-26 20:31:13.063059] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.223 EAL: No free 2048 kB hugepages reported on node 1 00:13:55.223 [2024-04-26 20:31:13.202586] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:55.223 [2024-04-26 20:31:13.299230] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:55.223 [2024-04-26 20:31:13.299440] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:55.223 [2024-04-26 20:31:13.299454] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:55.223 [2024-04-26 20:31:13.299465] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:55.223 [2024-04-26 20:31:13.299529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:55.223 [2024-04-26 20:31:13.299639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:55.223 [2024-04-26 20:31:13.299746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.223 [2024-04-26 20:31:13.299756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:55.483 20:31:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:55.483 20:31:13 -- common/autotest_common.sh@852 -- # return 0 00:13:55.483 20:31:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:55.483 20:31:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:55.483 20:31:13 -- common/autotest_common.sh@10 -- # set +x 00:13:55.483 20:31:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:55.483 20:31:13 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:55.483 20:31:13 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:55.483 20:31:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:55.483 20:31:13 -- common/autotest_common.sh@10 -- # set +x 00:13:55.483 [2024-04-26 20:31:13.821371] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:55.744 20:31:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:55.744 20:31:13 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:55.744 20:31:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:55.744 20:31:13 -- common/autotest_common.sh@10 -- # set +x 00:13:55.744 Malloc1 00:13:55.744 20:31:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:55.744 20:31:14 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:55.744 20:31:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:55.744 20:31:14 -- common/autotest_common.sh@10 -- # set +x 00:13:55.744 20:31:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:55.744 20:31:14 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:55.744 20:31:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:55.744 20:31:14 -- common/autotest_common.sh@10 -- # set +x 00:13:56.003 20:31:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:56.003 20:31:14 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
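Steps @52 through @56 of filesystem.sh above provision the target entirely over JSON-RPC: create the TCP transport with in-capsule data disabled (-c 0, this being the no_in_capsule variant), back it with a 512 MiB malloc bdev of 512-byte blocks, wrap that bdev in a new subsystem, and open the 10.0.0.2:4420 listener. The same five calls can be issued by hand through scripts/rpc.py against the default /var/tmp/spdk.sock:

    # Provision an NVMe-oF/TCP subsystem over the default RPC socket.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420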
00:13:56.003 20:31:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:56.003 20:31:14 -- common/autotest_common.sh@10 -- # set +x 00:13:56.003 [2024-04-26 20:31:14.090554] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:56.003 20:31:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:56.003 20:31:14 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:56.003 20:31:14 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:13:56.003 20:31:14 -- common/autotest_common.sh@1358 -- # local bdev_info 00:13:56.003 20:31:14 -- common/autotest_common.sh@1359 -- # local bs 00:13:56.003 20:31:14 -- common/autotest_common.sh@1360 -- # local nb 00:13:56.003 20:31:14 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:56.003 20:31:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:56.003 20:31:14 -- common/autotest_common.sh@10 -- # set +x 00:13:56.003 20:31:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:56.003 20:31:14 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:13:56.003 { 00:13:56.003 "name": "Malloc1", 00:13:56.003 "aliases": [ 00:13:56.003 "86ec7d95-ebb6-454b-bf01-c24ff4a50d03" 00:13:56.003 ], 00:13:56.003 "product_name": "Malloc disk", 00:13:56.003 "block_size": 512, 00:13:56.003 "num_blocks": 1048576, 00:13:56.003 "uuid": "86ec7d95-ebb6-454b-bf01-c24ff4a50d03", 00:13:56.003 "assigned_rate_limits": { 00:13:56.003 "rw_ios_per_sec": 0, 00:13:56.003 "rw_mbytes_per_sec": 0, 00:13:56.003 "r_mbytes_per_sec": 0, 00:13:56.003 "w_mbytes_per_sec": 0 00:13:56.003 }, 00:13:56.003 "claimed": true, 00:13:56.003 "claim_type": "exclusive_write", 00:13:56.003 "zoned": false, 00:13:56.003 "supported_io_types": { 00:13:56.003 "read": true, 00:13:56.003 "write": true, 00:13:56.003 "unmap": true, 00:13:56.003 "write_zeroes": true, 00:13:56.003 "flush": true, 00:13:56.003 "reset": true, 00:13:56.003 "compare": false, 00:13:56.003 "compare_and_write": false, 00:13:56.003 "abort": true, 00:13:56.003 "nvme_admin": false, 00:13:56.003 "nvme_io": false 00:13:56.003 }, 00:13:56.003 "memory_domains": [ 00:13:56.003 { 00:13:56.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.003 "dma_device_type": 2 00:13:56.003 } 00:13:56.003 ], 00:13:56.003 "driver_specific": {} 00:13:56.003 } 00:13:56.003 ]' 00:13:56.003 20:31:14 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:13:56.003 20:31:14 -- common/autotest_common.sh@1362 -- # bs=512 00:13:56.003 20:31:14 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:13:56.003 20:31:14 -- common/autotest_common.sh@1363 -- # nb=1048576 00:13:56.003 20:31:14 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:13:56.003 20:31:14 -- common/autotest_common.sh@1367 -- # echo 512 00:13:56.003 20:31:14 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:56.003 20:31:14 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:57.383 20:31:15 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:57.383 20:31:15 -- common/autotest_common.sh@1177 -- # local i=0 00:13:57.383 20:31:15 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:57.383 20:31:15 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:57.383 20:31:15 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:59.283 20:31:17 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:59.540 20:31:17 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:59.540 20:31:17 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:59.540 20:31:17 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:59.540 20:31:17 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:59.540 20:31:17 -- common/autotest_common.sh@1187 -- # return 0 00:13:59.540 20:31:17 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:59.540 20:31:17 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:59.540 20:31:17 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:59.540 20:31:17 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:59.540 20:31:17 -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:59.540 20:31:17 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:59.540 20:31:17 -- setup/common.sh@80 -- # echo 536870912 00:13:59.540 20:31:17 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:59.540 20:31:17 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:59.540 20:31:17 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:59.540 20:31:17 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:59.798 20:31:17 -- target/filesystem.sh@69 -- # partprobe 00:14:00.369 20:31:18 -- target/filesystem.sh@70 -- # sleep 1 00:14:01.304 20:31:19 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:14:01.304 20:31:19 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:01.304 20:31:19 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:14:01.304 20:31:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:01.304 20:31:19 -- common/autotest_common.sh@10 -- # set +x 00:14:01.304 ************************************ 00:14:01.304 START TEST filesystem_ext4 00:14:01.304 ************************************ 00:14:01.304 20:31:19 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:14:01.304 20:31:19 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:01.304 20:31:19 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:01.304 20:31:19 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:01.304 20:31:19 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:14:01.304 20:31:19 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:14:01.304 20:31:19 -- common/autotest_common.sh@904 -- # local i=0 00:14:01.304 20:31:19 -- common/autotest_common.sh@905 -- # local force 00:14:01.304 20:31:19 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:14:01.304 20:31:19 -- common/autotest_common.sh@908 -- # force=-F 00:14:01.304 20:31:19 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:01.304 mke2fs 1.46.5 (30-Dec-2021) 00:14:01.564 Discarding device blocks: done 00:14:01.564 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:01.564 Filesystem UUID: 96a2b0c9-89ff-4618-b942-db3deaa09e48 00:14:01.564 Superblock backups stored on blocks: 00:14:01.564 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:01.564 00:14:01.564 Allocating group tables: done 00:14:01.564 Writing inode tables: done 00:14:04.861 Creating journal (8192 blocks): done 00:14:05.432 Writing superblocks and filesystem accounting information: done 00:14:05.432 00:14:05.432 20:31:23 --
common/autotest_common.sh@921 -- # return 0 00:14:05.433 20:31:23 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:05.433 20:31:23 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:05.433 20:31:23 -- target/filesystem.sh@25 -- # sync 00:14:05.433 20:31:23 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:05.433 20:31:23 -- target/filesystem.sh@27 -- # sync 00:14:05.433 20:31:23 -- target/filesystem.sh@29 -- # i=0 00:14:05.433 20:31:23 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:05.433 20:31:23 -- target/filesystem.sh@37 -- # kill -0 3434368 00:14:05.433 20:31:23 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:05.433 20:31:23 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:05.690 20:31:23 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:05.690 20:31:23 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:05.690 00:14:05.690 real 0m4.184s 00:14:05.690 user 0m0.022s 00:14:05.690 sys 0m0.046s 00:14:05.690 20:31:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:05.690 20:31:23 -- common/autotest_common.sh@10 -- # set +x 00:14:05.690 ************************************ 00:14:05.690 END TEST filesystem_ext4 00:14:05.690 ************************************ 00:14:05.690 20:31:23 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:05.690 20:31:23 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:14:05.690 20:31:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:05.690 20:31:23 -- common/autotest_common.sh@10 -- # set +x 00:14:05.690 ************************************ 00:14:05.690 START TEST filesystem_btrfs 00:14:05.690 ************************************ 00:14:05.690 20:31:23 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:05.690 20:31:23 -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:05.690 20:31:23 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:05.690 20:31:23 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:05.690 20:31:23 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:14:05.690 20:31:23 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:14:05.690 20:31:23 -- common/autotest_common.sh@904 -- # local i=0 00:14:05.690 20:31:23 -- common/autotest_common.sh@905 -- # local force 00:14:05.690 20:31:23 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:14:05.690 20:31:23 -- common/autotest_common.sh@910 -- # force=-f 00:14:05.690 20:31:23 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:05.949 btrfs-progs v6.6.2 00:14:05.949 See https://btrfs.readthedocs.io for more information. 00:14:05.949 00:14:05.949 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:14:05.949 NOTE: several default settings have changed in version 5.15, please make sure 00:14:05.949 this does not affect your deployments: 00:14:05.949 - DUP for metadata (-m dup) 00:14:05.949 - enabled no-holes (-O no-holes) 00:14:05.949 - enabled free-space-tree (-R free-space-tree) 00:14:05.949 00:14:05.949 Label: (null) 00:14:05.949 UUID: f6af8336-ec8a-471e-93a2-82403f1df100 00:14:05.949 Node size: 16384 00:14:05.949 Sector size: 4096 00:14:05.949 Filesystem size: 510.00MiB 00:14:05.949 Block group profiles: 00:14:05.949 Data: single 8.00MiB 00:14:05.949 Metadata: DUP 32.00MiB 00:14:05.949 System: DUP 8.00MiB 00:14:05.949 SSD detected: yes 00:14:05.949 Zoned device: no 00:14:05.949 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:14:05.949 Runtime features: free-space-tree 00:14:05.949 Checksum: crc32c 00:14:05.949 Number of devices: 1 00:14:05.949 Devices: 00:14:05.949 ID SIZE PATH 00:14:05.949 1 510.00MiB /dev/nvme0n1p1 00:14:05.949 00:14:05.949 20:31:24 -- common/autotest_common.sh@921 -- # return 0 00:14:05.949 20:31:24 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:06.890 20:31:24 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:06.890 20:31:24 -- target/filesystem.sh@25 -- # sync 00:14:06.890 20:31:24 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:06.890 20:31:24 -- target/filesystem.sh@27 -- # sync 00:14:06.890 20:31:24 -- target/filesystem.sh@29 -- # i=0 00:14:06.890 20:31:24 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:06.890 20:31:25 -- target/filesystem.sh@37 -- # kill -0 3434368 00:14:06.890 20:31:25 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:06.890 20:31:25 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:06.890 20:31:25 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:06.890 20:31:25 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:06.890 00:14:06.890 real 0m1.202s 00:14:06.890 user 0m0.024s 00:14:06.890 sys 0m0.054s 00:14:06.890 20:31:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:06.890 20:31:25 -- common/autotest_common.sh@10 -- # set +x 00:14:06.890 ************************************ 00:14:06.890 END TEST filesystem_btrfs 00:14:06.890 ************************************ 00:14:06.890 20:31:25 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:14:06.890 20:31:25 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:14:06.890 20:31:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:06.890 20:31:25 -- common/autotest_common.sh@10 -- # set +x 00:14:06.890 ************************************ 00:14:06.890 START TEST filesystem_xfs 00:14:06.890 ************************************ 00:14:06.890 20:31:25 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:14:06.890 20:31:25 -- target/filesystem.sh@18 -- # fstype=xfs 00:14:06.890 20:31:25 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:06.890 20:31:25 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:06.890 20:31:25 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:14:06.890 20:31:25 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:14:06.890 20:31:25 -- common/autotest_common.sh@904 -- # local i=0 00:14:06.890 20:31:25 -- common/autotest_common.sh@905 -- # local force 00:14:06.890 20:31:25 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:14:06.890 20:31:25 -- common/autotest_common.sh@910 -- # force=-f 00:14:06.890 20:31:25 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:06.890 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:06.890 = sectsz=512 attr=2, projid32bit=1 00:14:06.890 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:06.890 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:06.890 data = bsize=4096 blocks=130560, imaxpct=25 00:14:06.890 = sunit=0 swidth=0 blks 00:14:06.890 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:06.890 log =internal log bsize=4096 blocks=16384, version=2 00:14:06.890 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:06.890 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:07.828 Discarding blocks...Done. 00:14:07.828 20:31:25 -- common/autotest_common.sh@921 -- # return 0 00:14:07.828 20:31:25 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:09.738 20:31:27 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:09.738 20:31:27 -- target/filesystem.sh@25 -- # sync 00:14:09.738 20:31:27 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:09.738 20:31:27 -- target/filesystem.sh@27 -- # sync 00:14:09.738 20:31:27 -- target/filesystem.sh@29 -- # i=0 00:14:09.738 20:31:27 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:09.738 20:31:27 -- target/filesystem.sh@37 -- # kill -0 3434368 00:14:09.738 20:31:27 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:09.739 20:31:27 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:09.739 20:31:27 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:09.739 20:31:27 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:09.739 00:14:09.739 real 0m2.757s 00:14:09.739 user 0m0.019s 00:14:09.739 sys 0m0.053s 00:14:09.739 20:31:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:09.739 20:31:27 -- common/autotest_common.sh@10 -- # set +x 00:14:09.739 ************************************ 00:14:09.739 END TEST filesystem_xfs 00:14:09.739 ************************************ 00:14:09.739 20:31:27 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:09.739 20:31:27 -- target/filesystem.sh@93 -- # sync 00:14:09.739 20:31:27 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:09.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.739 20:31:28 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:09.739 20:31:28 -- common/autotest_common.sh@1198 -- # local i=0 00:14:09.739 20:31:28 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:09.739 20:31:28 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:09.739 20:31:28 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:09.739 20:31:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:09.739 20:31:28 -- common/autotest_common.sh@1210 -- # return 0 00:14:09.739 20:31:28 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:09.739 20:31:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.739 20:31:28 -- common/autotest_common.sh@10 -- # set +x 00:14:09.739 20:31:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.739 20:31:28 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:09.739 20:31:28 -- target/filesystem.sh@101 -- # killprocess 3434368 00:14:09.739 20:31:28 -- common/autotest_common.sh@926 -- # '[' -z 3434368 ']' 00:14:09.739 20:31:28 -- common/autotest_common.sh@930 -- # kill -0 3434368 00:14:09.739 20:31:28 -- 
common/autotest_common.sh@931 -- # uname 00:14:09.739 20:31:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:09.739 20:31:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3434368 00:14:09.999 20:31:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:09.999 20:31:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:09.999 20:31:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3434368' 00:14:09.999 killing process with pid 3434368 00:14:09.999 20:31:28 -- common/autotest_common.sh@945 -- # kill 3434368 00:14:09.999 20:31:28 -- common/autotest_common.sh@950 -- # wait 3434368 00:14:10.938 20:31:29 -- target/filesystem.sh@102 -- # nvmfpid= 00:14:10.938 00:14:10.938 real 0m16.074s 00:14:10.938 user 1m2.202s 00:14:10.938 sys 0m1.097s 00:14:10.938 20:31:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:10.938 20:31:29 -- common/autotest_common.sh@10 -- # set +x 00:14:10.938 ************************************ 00:14:10.938 END TEST nvmf_filesystem_no_in_capsule 00:14:10.938 ************************************ 00:14:10.938 20:31:29 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:14:10.938 20:31:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:10.938 20:31:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:10.938 20:31:29 -- common/autotest_common.sh@10 -- # set +x 00:14:10.938 ************************************ 00:14:10.938 START TEST nvmf_filesystem_in_capsule 00:14:10.938 ************************************ 00:14:10.938 20:31:29 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:14:10.938 20:31:29 -- target/filesystem.sh@47 -- # in_capsule=4096 00:14:10.938 20:31:29 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:10.938 20:31:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:10.938 20:31:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:10.938 20:31:29 -- common/autotest_common.sh@10 -- # set +x 00:14:10.938 20:31:29 -- nvmf/common.sh@469 -- # nvmfpid=3437895 00:14:10.938 20:31:29 -- nvmf/common.sh@470 -- # waitforlisten 3437895 00:14:10.938 20:31:29 -- common/autotest_common.sh@819 -- # '[' -z 3437895 ']' 00:14:10.938 20:31:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.938 20:31:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:10.938 20:31:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.938 20:31:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:10.938 20:31:29 -- common/autotest_common.sh@10 -- # set +x 00:14:10.938 20:31:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:10.938 [2024-04-26 20:31:29.159859] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
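
Here nvmfappstart has just launched a fresh nvmf_tgt with core mask 0xF for the in-capsule variant, and waitforlisten will hold the script until the new process answers on its RPC socket. A minimal sketch of that start-and-wait step, assuming a built SPDK tree and the default /var/tmp/spdk.sock socket (the polling loop illustrates what waitforlisten does rather than copying it, and SPDK_ROOT is a placeholder):

    #!/usr/bin/env bash
    # Start an SPDK target and wait for its RPC socket to come up.
    # SPDK_ROOT is an assumed path to a built SPDK tree; the CI run above
    # additionally wraps the launch in "ip netns exec cvl_0_0_ns_spdk".
    SPDK_ROOT=/path/to/spdk
    rpc_sock=/var/tmp/spdk.sock

    "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    for ((i = 0; i < 100; i++)); do
        # rpc_get_methods is a cheap RPC that any live SPDK app answers.
        if "$SPDK_ROOT/scripts/rpc.py" -t 1 -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; then
            echo "nvmf_tgt (pid $nvmfpid) is ready on $rpc_sock"
            exit 0
        fi
        sleep 0.1
    done
    echo "timed out waiting for $rpc_sock" >&2
    kill "$nvmfpid"
    exit 1
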
00:14:10.938 [2024-04-26 20:31:29.159971] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.938 EAL: No free 2048 kB hugepages reported on node 1 00:14:11.198 [2024-04-26 20:31:29.286478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:11.198 [2024-04-26 20:31:29.378986] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:11.198 [2024-04-26 20:31:29.379188] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.198 [2024-04-26 20:31:29.379201] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.198 [2024-04-26 20:31:29.379212] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:11.198 [2024-04-26 20:31:29.379369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.198 [2024-04-26 20:31:29.379466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.198 [2024-04-26 20:31:29.379505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.198 [2024-04-26 20:31:29.379515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:11.770 20:31:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:11.770 20:31:29 -- common/autotest_common.sh@852 -- # return 0 00:14:11.770 20:31:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:11.770 20:31:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:11.770 20:31:29 -- common/autotest_common.sh@10 -- # set +x 00:14:11.770 20:31:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.770 20:31:29 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:11.770 20:31:29 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:14:11.770 20:31:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:11.770 20:31:29 -- common/autotest_common.sh@10 -- # set +x 00:14:11.770 [2024-04-26 20:31:29.914461] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:11.770 20:31:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:11.770 20:31:29 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:11.770 20:31:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:11.770 20:31:29 -- common/autotest_common.sh@10 -- # set +x 00:14:12.027 Malloc1 00:14:12.027 20:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:12.027 20:31:30 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:12.027 20:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:12.027 20:31:30 -- common/autotest_common.sh@10 -- # set +x 00:14:12.028 20:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:12.028 20:31:30 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:12.028 20:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:12.028 20:31:30 -- common/autotest_common.sh@10 -- # set +x 00:14:12.028 20:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:12.028 20:31:30 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
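
The target-side setup for the in-capsule run is the same five RPC calls as before, with one difference: nvmf_create_transport now passes -c 4096, so up to 4096 bytes of write data travel inside the command capsule instead of being fetched separately. Collected into one sketch (the rpc.py location is an assumed placeholder; NQN, address, and sizes are taken from the trace above):

    #!/usr/bin/env bash
    # Replay of the subsystem setup traced above.
    rpc() { /path/to/spdk/scripts/rpc.py "$@"; }   # assumed rpc.py location
    nqn=nqn.2016-06.io.spdk:cnode1

    # TCP transport: 8192-byte I/O unit, 4096 bytes of in-capsule data.
    rpc nvmf_create_transport -t tcp -o -u 8192 -c 4096
    # 512 MiB RAM-backed bdev: the 1048576 x 512 B geometry queried later
    # via bdev_get_bdevs comes from this call.
    rpc bdev_malloc_create 512 512 -b Malloc1
    # Subsystem carrying the serial number the initiator greps for.
    rpc nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME
    rpc nvmf_subsystem_add_ns "$nqn" Malloc1
    rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
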
00:14:12.028 20:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:12.028 20:31:30 -- common/autotest_common.sh@10 -- # set +x 00:14:12.028 [2024-04-26 20:31:30.194749] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:12.028 20:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:12.028 20:31:30 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:12.028 20:31:30 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:14:12.028 20:31:30 -- common/autotest_common.sh@1358 -- # local bdev_info 00:14:12.028 20:31:30 -- common/autotest_common.sh@1359 -- # local bs 00:14:12.028 20:31:30 -- common/autotest_common.sh@1360 -- # local nb 00:14:12.028 20:31:30 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:12.028 20:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:12.028 20:31:30 -- common/autotest_common.sh@10 -- # set +x 00:14:12.028 20:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:12.028 20:31:30 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:14:12.028 { 00:14:12.028 "name": "Malloc1", 00:14:12.028 "aliases": [ 00:14:12.028 "abe5365f-a961-4232-b97d-b546f6df0c11" 00:14:12.028 ], 00:14:12.028 "product_name": "Malloc disk", 00:14:12.028 "block_size": 512, 00:14:12.028 "num_blocks": 1048576, 00:14:12.028 "uuid": "abe5365f-a961-4232-b97d-b546f6df0c11", 00:14:12.028 "assigned_rate_limits": { 00:14:12.028 "rw_ios_per_sec": 0, 00:14:12.028 "rw_mbytes_per_sec": 0, 00:14:12.028 "r_mbytes_per_sec": 0, 00:14:12.028 "w_mbytes_per_sec": 0 00:14:12.028 }, 00:14:12.028 "claimed": true, 00:14:12.028 "claim_type": "exclusive_write", 00:14:12.028 "zoned": false, 00:14:12.028 "supported_io_types": { 00:14:12.028 "read": true, 00:14:12.028 "write": true, 00:14:12.028 "unmap": true, 00:14:12.028 "write_zeroes": true, 00:14:12.028 "flush": true, 00:14:12.028 "reset": true, 00:14:12.028 "compare": false, 00:14:12.028 "compare_and_write": false, 00:14:12.028 "abort": true, 00:14:12.028 "nvme_admin": false, 00:14:12.028 "nvme_io": false 00:14:12.028 }, 00:14:12.028 "memory_domains": [ 00:14:12.028 { 00:14:12.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.028 "dma_device_type": 2 00:14:12.028 } 00:14:12.028 ], 00:14:12.028 "driver_specific": {} 00:14:12.028 } 00:14:12.028 ]' 00:14:12.028 20:31:30 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:14:12.028 20:31:30 -- common/autotest_common.sh@1362 -- # bs=512 00:14:12.028 20:31:30 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:14:12.028 20:31:30 -- common/autotest_common.sh@1363 -- # nb=1048576 00:14:12.028 20:31:30 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:14:12.028 20:31:30 -- common/autotest_common.sh@1367 -- # echo 512 00:14:12.028 20:31:30 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:12.028 20:31:30 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:13.936 20:31:31 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:14:13.936 20:31:31 -- common/autotest_common.sh@1177 -- # local i=0 00:14:13.936 20:31:31 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:13.936 20:31:31 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:13.936 20:31:31 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:15.846 20:31:33 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:15.847 20:31:33 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:15.847 20:31:33 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:15.847 20:31:33 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:15.847 20:31:33 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:15.847 20:31:33 -- common/autotest_common.sh@1187 -- # return 0 00:14:15.847 20:31:33 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:15.847 20:31:33 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:15.847 20:31:33 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:15.847 20:31:33 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:15.847 20:31:33 -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:15.847 20:31:33 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:15.847 20:31:33 -- setup/common.sh@80 -- # echo 536870912 00:14:15.847 20:31:33 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:15.847 20:31:33 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:15.847 20:31:33 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:15.847 20:31:33 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:15.847 20:31:34 -- target/filesystem.sh@69 -- # partprobe 00:14:16.419 20:31:34 -- target/filesystem.sh@70 -- # sleep 1 00:14:17.357 20:31:35 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:14:17.357 20:31:35 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:17.357 20:31:35 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:14:17.357 20:31:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:17.357 20:31:35 -- common/autotest_common.sh@10 -- # set +x 00:14:17.357 ************************************ 00:14:17.357 START TEST filesystem_in_capsule_ext4 00:14:17.357 ************************************ 00:14:17.357 20:31:35 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:14:17.357 20:31:35 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:17.357 20:31:35 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:17.357 20:31:35 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:17.357 20:31:35 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:14:17.357 20:31:35 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:14:17.357 20:31:35 -- common/autotest_common.sh@904 -- # local i=0 00:14:17.357 20:31:35 -- common/autotest_common.sh@905 -- # local force 00:14:17.357 20:31:35 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:14:17.357 20:31:35 -- common/autotest_common.sh@908 -- # force=-F 00:14:17.357 20:31:35 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:17.357 mke2fs 1.46.5 (30-Dec-2021) 00:14:17.357 Discarding device blocks: done 00:14:17.357 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:17.357 Filesystem UUID: 9d0809aa-e082-4b3a-a4fe-8d1983bbc39d 00:14:17.357 Superblock backups stored on blocks: 00:14:17.357 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:17.357 00:14:17.357 Allocating group tables: done 00:14:17.357 Writing inode tables: done 00:14:18.297 Creating journal (8192 blocks): done 00:14:19.125 Writing superblocks and filesystem accounting information: done 00:14:19.125 00:14:19.125 
20:31:37 -- common/autotest_common.sh@921 -- # return 0 00:14:19.125 20:31:37 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:19.386 20:31:37 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:19.386 20:31:37 -- target/filesystem.sh@25 -- # sync 00:14:19.386 20:31:37 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:19.386 20:31:37 -- target/filesystem.sh@27 -- # sync 00:14:19.386 20:31:37 -- target/filesystem.sh@29 -- # i=0 00:14:19.386 20:31:37 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:19.386 20:31:37 -- target/filesystem.sh@37 -- # kill -0 3437895 00:14:19.386 20:31:37 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:19.386 20:31:37 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:19.386 20:31:37 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:19.386 20:31:37 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:19.386 00:14:19.386 real 0m2.057s 00:14:19.386 user 0m0.017s 00:14:19.386 sys 0m0.046s 00:14:19.386 20:31:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:19.386 20:31:37 -- common/autotest_common.sh@10 -- # set +x 00:14:19.386 ************************************ 00:14:19.386 END TEST filesystem_in_capsule_ext4 00:14:19.386 ************************************ 00:14:19.386 20:31:37 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:19.386 20:31:37 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:14:19.386 20:31:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:19.386 20:31:37 -- common/autotest_common.sh@10 -- # set +x 00:14:19.386 ************************************ 00:14:19.386 START TEST filesystem_in_capsule_btrfs 00:14:19.386 ************************************ 00:14:19.386 20:31:37 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:19.386 20:31:37 -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:19.386 20:31:37 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:19.386 20:31:37 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:19.386 20:31:37 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:14:19.386 20:31:37 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:14:19.386 20:31:37 -- common/autotest_common.sh@904 -- # local i=0 00:14:19.386 20:31:37 -- common/autotest_common.sh@905 -- # local force 00:14:19.386 20:31:37 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:14:19.386 20:31:37 -- common/autotest_common.sh@910 -- # force=-f 00:14:19.386 20:31:37 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:19.645 btrfs-progs v6.6.2 00:14:19.645 See https://btrfs.readthedocs.io for more information. 00:14:19.645 00:14:19.645 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:14:19.645 NOTE: several default settings have changed in version 5.15, please make sure 00:14:19.645 this does not affect your deployments: 00:14:19.645 - DUP for metadata (-m dup) 00:14:19.645 - enabled no-holes (-O no-holes) 00:14:19.645 - enabled free-space-tree (-R free-space-tree) 00:14:19.645 00:14:19.645 Label: (null) 00:14:19.645 UUID: 37ddc26d-131c-4751-81e0-595675754d70 00:14:19.645 Node size: 16384 00:14:19.645 Sector size: 4096 00:14:19.645 Filesystem size: 510.00MiB 00:14:19.646 Block group profiles: 00:14:19.646 Data: single 8.00MiB 00:14:19.646 Metadata: DUP 32.00MiB 00:14:19.646 System: DUP 8.00MiB 00:14:19.646 SSD detected: yes 00:14:19.646 Zoned device: no 00:14:19.646 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:14:19.646 Runtime features: free-space-tree 00:14:19.646 Checksum: crc32c 00:14:19.646 Number of devices: 1 00:14:19.646 Devices: 00:14:19.646 ID SIZE PATH 00:14:19.646 1 510.00MiB /dev/nvme0n1p1 00:14:19.646 00:14:19.646 20:31:37 -- common/autotest_common.sh@921 -- # return 0 00:14:19.646 20:31:37 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:19.904 20:31:38 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:19.905 20:31:38 -- target/filesystem.sh@25 -- # sync 00:14:19.905 20:31:38 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:19.905 20:31:38 -- target/filesystem.sh@27 -- # sync 00:14:19.905 20:31:38 -- target/filesystem.sh@29 -- # i=0 00:14:19.905 20:31:38 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:19.905 20:31:38 -- target/filesystem.sh@37 -- # kill -0 3437895 00:14:19.905 20:31:38 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:19.905 20:31:38 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:19.905 20:31:38 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:19.905 20:31:38 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:19.905 00:14:19.905 real 0m0.541s 00:14:19.905 user 0m0.017s 00:14:19.905 sys 0m0.060s 00:14:19.905 20:31:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:19.905 20:31:38 -- common/autotest_common.sh@10 -- # set +x 00:14:19.905 ************************************ 00:14:19.905 END TEST filesystem_in_capsule_btrfs 00:14:19.905 ************************************ 00:14:19.905 20:31:38 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:14:19.905 20:31:38 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:14:19.905 20:31:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:19.905 20:31:38 -- common/autotest_common.sh@10 -- # set +x 00:14:19.905 ************************************ 00:14:19.905 START TEST filesystem_in_capsule_xfs 00:14:19.905 ************************************ 00:14:19.905 20:31:38 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:14:19.905 20:31:38 -- target/filesystem.sh@18 -- # fstype=xfs 00:14:19.905 20:31:38 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:19.905 20:31:38 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:19.905 20:31:38 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:14:19.905 20:31:38 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:14:19.905 20:31:38 -- common/autotest_common.sh@904 -- # local i=0 00:14:19.905 20:31:38 -- common/autotest_common.sh@905 -- # local force 00:14:19.905 20:31:38 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:14:19.905 20:31:38 -- common/autotest_common.sh@910 -- # force=-f 
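
By this third filesystem type the shape of the make_filesystem helper is clear from its xtrace: stash the type and device, pick -F when the type is ext4 and -f otherwise, then run mkfs.<type> on the partition. A sketch reconstructed from those traces (the retry bound is an assumption; the log only shows i initialised to 0):

    # make_filesystem as the xtrace above suggests it behaves.
    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0
        local force

        if [ "$fstype" = ext4 ]; then
            force=-F          # mke2fs spells "force" in uppercase
        else
            force=-f          # mkfs.btrfs and mkfs.xfs use lowercase
        fi

        # Retry a few times in case the partition node is still settling.
        until mkfs."$fstype" $force "$dev_name"; do
            (( ++i >= 3 )) && return 1
            sleep 1
        done
        return 0
    }
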
00:14:19.905 20:31:38 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:20.163 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:20.163 = sectsz=512 attr=2, projid32bit=1 00:14:20.163 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:20.163 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:20.163 data = bsize=4096 blocks=130560, imaxpct=25 00:14:20.163 = sunit=0 swidth=0 blks 00:14:20.163 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:20.163 log =internal log bsize=4096 blocks=16384, version=2 00:14:20.163 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:20.163 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:20.731 Discarding blocks...Done. 00:14:20.731 20:31:39 -- common/autotest_common.sh@921 -- # return 0 00:14:20.731 20:31:39 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:22.715 20:31:40 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:22.715 20:31:40 -- target/filesystem.sh@25 -- # sync 00:14:22.715 20:31:40 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:22.715 20:31:40 -- target/filesystem.sh@27 -- # sync 00:14:22.715 20:31:40 -- target/filesystem.sh@29 -- # i=0 00:14:22.715 20:31:40 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:22.715 20:31:40 -- target/filesystem.sh@37 -- # kill -0 3437895 00:14:22.715 20:31:40 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:22.715 20:31:40 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:22.715 20:31:40 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:22.715 20:31:40 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:22.715 00:14:22.715 real 0m2.664s 00:14:22.715 user 0m0.029s 00:14:22.715 sys 0m0.041s 00:14:22.715 20:31:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:22.715 20:31:40 -- common/autotest_common.sh@10 -- # set +x 00:14:22.715 ************************************ 00:14:22.715 END TEST filesystem_in_capsule_xfs 00:14:22.715 ************************************ 00:14:22.715 20:31:40 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:22.715 20:31:40 -- target/filesystem.sh@93 -- # sync 00:14:22.715 20:31:40 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:22.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.975 20:31:41 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:22.975 20:31:41 -- common/autotest_common.sh@1198 -- # local i=0 00:14:22.975 20:31:41 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:22.975 20:31:41 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:22.975 20:31:41 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:22.975 20:31:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:22.975 20:31:41 -- common/autotest_common.sh@1210 -- # return 0 00:14:22.975 20:31:41 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:22.975 20:31:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.975 20:31:41 -- common/autotest_common.sh@10 -- # set +x 00:14:22.975 20:31:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:22.975 20:31:41 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:22.975 20:31:41 -- target/filesystem.sh@101 -- # killprocess 3437895 00:14:22.975 20:31:41 -- common/autotest_common.sh@926 -- # '[' -z 3437895 ']' 00:14:22.975 20:31:41 -- common/autotest_common.sh@930 -- # kill -0 3437895 
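
The teardown just traced, and finishing below, mirrors the setup: remove the test partition under flock, disconnect the kernel initiator, delete the subsystem over RPC, and kill the target once kill -0 confirms the pid is still alive. Condensed into one sketch (the pid argument and rpc.py location are placeholders):

    #!/usr/bin/env bash
    # Condensed teardown for one filesystem-test round.
    rpc() { /path/to/spdk/scripts/rpc.py "$@"; }   # assumed rpc.py location
    nqn=nqn.2016-06.io.spdk:cnode1
    nvmfpid=$1                                     # pid of the nvmf_tgt app

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1   # drop the test partition
    sync
    nvme disconnect -n "$nqn"         # detach the initiator-side controller
    rpc nvmf_delete_subsystem "$nqn"  # tear down the target side
    kill "$nvmfpid"
    wait "$nvmfpid" 2>/dev/null       # reap it; ignore "not a child" noise
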
00:14:22.975 20:31:41 -- common/autotest_common.sh@931 -- # uname 00:14:22.975 20:31:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:22.975 20:31:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3437895 00:14:22.975 20:31:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:22.975 20:31:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:22.975 20:31:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3437895' 00:14:22.975 killing process with pid 3437895 00:14:22.975 20:31:41 -- common/autotest_common.sh@945 -- # kill 3437895 00:14:22.975 20:31:41 -- common/autotest_common.sh@950 -- # wait 3437895 00:14:23.912 20:31:42 -- target/filesystem.sh@102 -- # nvmfpid= 00:14:23.912 00:14:23.912 real 0m13.095s 00:14:23.912 user 0m50.487s 00:14:23.912 sys 0m1.021s 00:14:23.912 20:31:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:23.912 20:31:42 -- common/autotest_common.sh@10 -- # set +x 00:14:23.912 ************************************ 00:14:23.912 END TEST nvmf_filesystem_in_capsule 00:14:23.912 ************************************ 00:14:23.912 20:31:42 -- target/filesystem.sh@108 -- # nvmftestfini 00:14:23.912 20:31:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:23.912 20:31:42 -- nvmf/common.sh@116 -- # sync 00:14:23.912 20:31:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:23.912 20:31:42 -- nvmf/common.sh@119 -- # set +e 00:14:23.912 20:31:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:23.912 20:31:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:23.912 rmmod nvme_tcp 00:14:23.912 rmmod nvme_fabrics 00:14:24.172 rmmod nvme_keyring 00:14:24.172 20:31:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:24.172 20:31:42 -- nvmf/common.sh@123 -- # set -e 00:14:24.172 20:31:42 -- nvmf/common.sh@124 -- # return 0 00:14:24.172 20:31:42 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:14:24.172 20:31:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:24.172 20:31:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:24.172 20:31:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:24.172 20:31:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:24.172 20:31:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:24.172 20:31:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.172 20:31:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:24.172 20:31:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.079 20:31:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:26.079 00:14:26.079 real 0m37.655s 00:14:26.079 user 1m54.417s 00:14:26.079 sys 0m6.808s 00:14:26.079 20:31:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:26.079 20:31:44 -- common/autotest_common.sh@10 -- # set +x 00:14:26.079 ************************************ 00:14:26.079 END TEST nvmf_filesystem 00:14:26.079 ************************************ 00:14:26.079 20:31:44 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:26.079 20:31:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:26.079 20:31:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:26.079 20:31:44 -- common/autotest_common.sh@10 -- # set +x 00:14:26.079 ************************************ 00:14:26.079 START TEST nvmf_discovery 00:14:26.079 ************************************ 00:14:26.079 20:31:44 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:26.340 * Looking for test storage... 00:14:26.340 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:14:26.340 20:31:44 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:14:26.340 20:31:44 -- nvmf/common.sh@7 -- # uname -s 00:14:26.340 20:31:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:26.340 20:31:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:26.340 20:31:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:26.340 20:31:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:26.340 20:31:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:26.340 20:31:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:26.340 20:31:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:26.340 20:31:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:26.340 20:31:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:26.340 20:31:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:26.340 20:31:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:14:26.340 20:31:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:14:26.340 20:31:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:26.340 20:31:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:26.340 20:31:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:26.340 20:31:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:14:26.340 20:31:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:26.340 20:31:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:26.340 20:31:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:26.341 20:31:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.341 20:31:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.341 20:31:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.341 20:31:44 -- paths/export.sh@5 -- # export PATH 00:14:26.341 20:31:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.341 20:31:44 -- nvmf/common.sh@46 -- # : 0 00:14:26.341 20:31:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:26.341 20:31:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:26.341 20:31:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:26.341 20:31:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:26.341 20:31:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:26.341 20:31:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:26.341 20:31:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:26.341 20:31:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:26.341 20:31:44 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:14:26.341 20:31:44 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:14:26.341 20:31:44 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:14:26.341 20:31:44 -- target/discovery.sh@15 -- # hash nvme 00:14:26.341 20:31:44 -- target/discovery.sh@20 -- # nvmftestinit 00:14:26.341 20:31:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:26.341 20:31:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:26.341 20:31:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:26.341 20:31:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:26.341 20:31:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:26.341 20:31:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.341 20:31:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:26.341 20:31:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.341 20:31:44 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:14:26.341 20:31:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:26.341 20:31:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:26.341 20:31:44 -- common/autotest_common.sh@10 -- # set +x 00:14:32.924 20:31:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:32.924 20:31:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:32.924 20:31:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:32.924 20:31:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:32.924 20:31:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:32.924 20:31:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:32.924 20:31:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:32.924 
20:31:50 -- nvmf/common.sh@294 -- # net_devs=() 00:14:32.924 20:31:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:32.924 20:31:50 -- nvmf/common.sh@295 -- # e810=() 00:14:32.924 20:31:50 -- nvmf/common.sh@295 -- # local -ga e810 00:14:32.924 20:31:50 -- nvmf/common.sh@296 -- # x722=() 00:14:32.924 20:31:50 -- nvmf/common.sh@296 -- # local -ga x722 00:14:32.924 20:31:50 -- nvmf/common.sh@297 -- # mlx=() 00:14:32.924 20:31:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:32.924 20:31:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:32.924 20:31:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:32.924 20:31:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:32.924 20:31:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:32.924 20:31:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:32.924 20:31:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:32.924 20:31:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:32.924 20:31:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:32.924 20:31:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:32.924 20:31:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:32.924 20:31:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:32.924 20:31:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:32.924 20:31:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:32.924 20:31:50 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:14:32.924 20:31:50 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:14:32.924 20:31:50 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:14:32.924 20:31:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:32.924 20:31:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:32.924 20:31:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:14:32.924 Found 0000:27:00.0 (0x8086 - 0x159b) 00:14:32.924 20:31:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:32.924 20:31:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:32.924 20:31:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:32.924 20:31:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.924 20:31:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:32.924 20:31:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:32.924 20:31:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:14:32.924 Found 0000:27:00.1 (0x8086 - 0x159b) 00:14:32.924 20:31:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:32.924 20:31:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:32.924 20:31:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:32.924 20:31:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.924 20:31:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:32.924 20:31:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:32.924 20:31:50 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:14:32.924 20:31:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:32.924 20:31:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.924 20:31:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:32.924 20:31:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.924 20:31:50 -- 
nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:14:32.924 Found net devices under 0000:27:00.0: cvl_0_0 00:14:32.924 20:31:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.924 20:31:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:32.924 20:31:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.924 20:31:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:32.924 20:31:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.924 20:31:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:14:32.924 Found net devices under 0000:27:00.1: cvl_0_1 00:14:32.924 20:31:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.924 20:31:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:32.924 20:31:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:32.924 20:31:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:32.924 20:31:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:32.924 20:31:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:32.924 20:31:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:32.924 20:31:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:32.924 20:31:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:32.924 20:31:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:32.924 20:31:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:32.924 20:31:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:32.924 20:31:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:32.924 20:31:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:32.924 20:31:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:32.924 20:31:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:32.924 20:31:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:32.924 20:31:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:32.924 20:31:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:32.924 20:31:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:32.924 20:31:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:32.924 20:31:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:32.924 20:31:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:32.924 20:31:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:32.924 20:31:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:32.924 20:31:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:32.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:32.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:14:32.924 00:14:32.924 --- 10.0.0.2 ping statistics --- 00:14:32.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.924 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:14:32.924 20:31:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:32.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:32.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:14:32.924 00:14:32.924 --- 10.0.0.1 ping statistics --- 00:14:32.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.924 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:14:32.924 20:31:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:32.924 20:31:50 -- nvmf/common.sh@410 -- # return 0 00:14:32.924 20:31:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:32.924 20:31:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:32.924 20:31:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:32.924 20:31:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:32.924 20:31:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:32.924 20:31:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:32.924 20:31:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:32.924 20:31:50 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:14:32.924 20:31:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:32.924 20:31:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:32.924 20:31:50 -- common/autotest_common.sh@10 -- # set +x 00:14:32.924 20:31:51 -- nvmf/common.sh@469 -- # nvmfpid=3444715 00:14:32.924 20:31:51 -- nvmf/common.sh@470 -- # waitforlisten 3444715 00:14:32.924 20:31:51 -- common/autotest_common.sh@819 -- # '[' -z 3444715 ']' 00:14:32.924 20:31:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.924 20:31:51 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:32.924 20:31:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:32.924 20:31:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.924 20:31:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:32.924 20:31:51 -- common/autotest_common.sh@10 -- # set +x 00:14:32.924 [2024-04-26 20:31:51.093749] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:32.924 [2024-04-26 20:31:51.093888] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.924 EAL: No free 2048 kB hugepages reported on node 1 00:14:32.924 [2024-04-26 20:31:51.238090] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:33.185 [2024-04-26 20:31:51.334978] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:33.185 [2024-04-26 20:31:51.335182] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:33.185 [2024-04-26 20:31:51.335196] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:33.185 [2024-04-26 20:31:51.335207] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
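
Stepping back, both pings succeed because nvmftestinit split the two ice ports across a network namespace: cvl_0_0 moved into cvl_0_0_ns_spdk as the target side (10.0.0.2) while cvl_0_1 stayed in the root namespace as the initiator side (10.0.0.1), with an iptables rule admitting NVMe/TCP traffic on port 4420. The same topology reduced to its ip(8) calls:

    #!/usr/bin/env bash
    # Namespace topology built by nvmftestinit for a loopback NVMe/TCP test.
    NS=cvl_0_0_ns_spdk
    TGT_IF=cvl_0_0     # port handed to the SPDK target
    INI_IF=cvl_0_1     # port kept for the kernel initiator

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # Let NVMe/TCP traffic from the initiator side in.
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                        # root ns -> target ns
    ip netns exec "$NS" ping -c 1 10.0.0.1    # target ns -> root ns
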
00:14:33.185 [2024-04-26 20:31:51.335277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.185 [2024-04-26 20:31:51.335392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:33.185 [2024-04-26 20:31:51.335496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.185 [2024-04-26 20:31:51.335506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:33.760 20:31:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:33.760 20:31:51 -- common/autotest_common.sh@852 -- # return 0 00:14:33.760 20:31:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:33.760 20:31:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:33.760 20:31:51 -- common/autotest_common.sh@10 -- # set +x 00:14:33.760 20:31:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:33.760 20:31:51 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:33.760 20:31:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.760 20:31:51 -- common/autotest_common.sh@10 -- # set +x 00:14:33.760 [2024-04-26 20:31:51.846778] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:33.760 20:31:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.760 20:31:51 -- target/discovery.sh@26 -- # seq 1 4 00:14:33.760 20:31:51 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:33.760 20:31:51 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:14:33.760 20:31:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.760 20:31:51 -- common/autotest_common.sh@10 -- # set +x 00:14:33.760 Null1 00:14:33.760 20:31:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.760 20:31:51 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:33.760 20:31:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.760 20:31:51 -- common/autotest_common.sh@10 -- # set +x 00:14:33.760 20:31:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.760 20:31:51 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:14:33.760 20:31:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.760 20:31:51 -- common/autotest_common.sh@10 -- # set +x 00:14:33.760 20:31:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.760 20:31:51 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:33.760 20:31:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.760 20:31:51 -- common/autotest_common.sh@10 -- # set +x 00:14:33.760 [2024-04-26 20:31:51.899109] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:33.760 20:31:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.760 20:31:51 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:33.760 20:31:51 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:14:33.760 20:31:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.760 20:31:51 -- common/autotest_common.sh@10 -- # set +x 00:14:33.760 Null2 00:14:33.760 20:31:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.760 20:31:51 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:14:33.760 20:31:51 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.760 20:31:51 -- common/autotest_common.sh@10 -- # set +x 00:14:33.760 20:31:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.760 20:31:51 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:14:33.760 20:31:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.760 20:31:51 -- common/autotest_common.sh@10 -- # set +x 00:14:33.760 20:31:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.760 20:31:51 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:33.760 20:31:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.760 20:31:51 -- common/autotest_common.sh@10 -- # set +x 00:14:33.760 20:31:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.760 20:31:51 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:33.760 20:31:51 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:14:33.760 20:31:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.760 20:31:51 -- common/autotest_common.sh@10 -- # set +x 00:14:33.760 Null3 00:14:33.760 20:31:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.760 20:31:51 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:14:33.760 20:31:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.760 20:31:51 -- common/autotest_common.sh@10 -- # set +x 00:14:33.760 20:31:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.760 20:31:51 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:14:33.760 20:31:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.760 20:31:51 -- common/autotest_common.sh@10 -- # set +x 00:14:33.760 20:31:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.760 20:31:51 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:14:33.760 20:31:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.760 20:31:51 -- common/autotest_common.sh@10 -- # set +x 00:14:33.760 20:31:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.760 20:31:51 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:33.760 20:31:51 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:14:33.760 20:31:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.760 20:31:51 -- common/autotest_common.sh@10 -- # set +x 00:14:33.760 Null4 00:14:33.760 20:31:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.760 20:31:51 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:14:33.760 20:31:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.760 20:31:51 -- common/autotest_common.sh@10 -- # set +x 00:14:33.760 20:31:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.760 20:31:51 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:14:33.760 20:31:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.760 20:31:51 -- common/autotest_common.sh@10 -- # set +x 00:14:33.760 20:31:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.760 20:31:51 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:14:33.760 
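The Null1 through Null4 sequence running here is the discovery test's provisioning loop: per index, one null bdev, one subsystem, one namespace, and one TCP listener. rpc_cmd is a thin wrapper over the RPC socket, so the same state can be built directly with rpc.py; a sketch using the RPC names and arguments from the loop (the script path is an assumption):

  for i in 1 2 3 4; do
    ./scripts/rpc.py bdev_null_create "Null$i" 102400 512                # name, size in MB, block size in bytes
    ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "SPDK0000000000000$i"                                      # -a allow any host, -s serial number
    ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
  done

All four listeners share the namespaced 10.0.0.2:4420 port, which is why the nvme discover below reports six records: the current discovery subsystem, the four cnode subsystems, and the 4430 referral added via nvmf_discovery_add_referral.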
20:31:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.760 20:31:51 -- common/autotest_common.sh@10 -- # set +x 00:14:33.760 20:31:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.760 20:31:51 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:33.760 20:31:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.760 20:31:51 -- common/autotest_common.sh@10 -- # set +x 00:14:33.760 20:31:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.760 20:31:52 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:14:33.760 20:31:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.760 20:31:52 -- common/autotest_common.sh@10 -- # set +x 00:14:33.760 20:31:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.760 20:31:52 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -a 10.0.0.2 -s 4420 00:14:33.760 00:14:33.760 Discovery Log Number of Records 6, Generation counter 6 00:14:33.760 =====Discovery Log Entry 0====== 00:14:33.760 trtype: tcp 00:14:33.760 adrfam: ipv4 00:14:33.761 subtype: current discovery subsystem 00:14:33.761 treq: not required 00:14:33.761 portid: 0 00:14:33.761 trsvcid: 4420 00:14:33.761 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:33.761 traddr: 10.0.0.2 00:14:33.761 eflags: explicit discovery connections, duplicate discovery information 00:14:33.761 sectype: none 00:14:33.761 =====Discovery Log Entry 1====== 00:14:33.761 trtype: tcp 00:14:33.761 adrfam: ipv4 00:14:33.761 subtype: nvme subsystem 00:14:33.761 treq: not required 00:14:33.761 portid: 0 00:14:33.761 trsvcid: 4420 00:14:33.761 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:33.761 traddr: 10.0.0.2 00:14:33.761 eflags: none 00:14:33.761 sectype: none 00:14:33.761 =====Discovery Log Entry 2====== 00:14:33.761 trtype: tcp 00:14:33.761 adrfam: ipv4 00:14:33.761 subtype: nvme subsystem 00:14:33.761 treq: not required 00:14:33.761 portid: 0 00:14:33.761 trsvcid: 4420 00:14:33.761 subnqn: nqn.2016-06.io.spdk:cnode2 00:14:33.761 traddr: 10.0.0.2 00:14:33.761 eflags: none 00:14:33.761 sectype: none 00:14:33.761 =====Discovery Log Entry 3====== 00:14:33.761 trtype: tcp 00:14:33.761 adrfam: ipv4 00:14:33.761 subtype: nvme subsystem 00:14:33.761 treq: not required 00:14:33.761 portid: 0 00:14:33.761 trsvcid: 4420 00:14:33.761 subnqn: nqn.2016-06.io.spdk:cnode3 00:14:33.761 traddr: 10.0.0.2 00:14:33.761 eflags: none 00:14:33.761 sectype: none 00:14:33.761 =====Discovery Log Entry 4====== 00:14:33.761 trtype: tcp 00:14:33.761 adrfam: ipv4 00:14:33.761 subtype: nvme subsystem 00:14:33.761 treq: not required 00:14:33.761 portid: 0 00:14:33.761 trsvcid: 4420 00:14:33.761 subnqn: nqn.2016-06.io.spdk:cnode4 00:14:33.761 traddr: 10.0.0.2 00:14:33.761 eflags: none 00:14:33.761 sectype: none 00:14:33.761 =====Discovery Log Entry 5====== 00:14:33.761 trtype: tcp 00:14:33.761 adrfam: ipv4 00:14:33.761 subtype: discovery subsystem referral 00:14:33.761 treq: not required 00:14:33.761 portid: 0 00:14:33.761 trsvcid: 4430 00:14:33.761 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:33.761 traddr: 10.0.0.2 00:14:33.761 eflags: none 00:14:33.761 sectype: none 00:14:33.761 20:31:52 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:14:33.761 Perform nvmf subsystem discovery via RPC 00:14:33.761 20:31:52 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:14:33.761 20:31:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.761 20:31:52 -- common/autotest_common.sh@10 -- # set +x 00:14:33.761 [2024-04-26 20:31:52.083051] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:14:33.761 [ 00:14:33.761 { 00:14:33.761 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:33.761 "subtype": "Discovery", 00:14:33.761 "listen_addresses": [ 00:14:33.761 { 00:14:33.761 "transport": "TCP", 00:14:33.761 "trtype": "TCP", 00:14:33.761 "adrfam": "IPv4", 00:14:33.761 "traddr": "10.0.0.2", 00:14:33.761 "trsvcid": "4420" 00:14:33.761 } 00:14:33.761 ], 00:14:33.761 "allow_any_host": true, 00:14:33.761 "hosts": [] 00:14:33.761 }, 00:14:33.761 { 00:14:33.761 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:33.761 "subtype": "NVMe", 00:14:33.761 "listen_addresses": [ 00:14:33.761 { 00:14:33.761 "transport": "TCP", 00:14:33.761 "trtype": "TCP", 00:14:33.761 "adrfam": "IPv4", 00:14:33.761 "traddr": "10.0.0.2", 00:14:33.761 "trsvcid": "4420" 00:14:33.761 } 00:14:33.761 ], 00:14:33.761 "allow_any_host": true, 00:14:33.761 "hosts": [], 00:14:33.761 "serial_number": "SPDK00000000000001", 00:14:33.761 "model_number": "SPDK bdev Controller", 00:14:33.761 "max_namespaces": 32, 00:14:33.761 "min_cntlid": 1, 00:14:33.761 "max_cntlid": 65519, 00:14:33.761 "namespaces": [ 00:14:33.761 { 00:14:33.761 "nsid": 1, 00:14:33.761 "bdev_name": "Null1", 00:14:33.761 "name": "Null1", 00:14:33.761 "nguid": "68A83E18B3074182A526D1BEA6F13C52", 00:14:33.761 "uuid": "68a83e18-b307-4182-a526-d1bea6f13c52" 00:14:33.761 } 00:14:33.761 ] 00:14:33.761 }, 00:14:33.761 { 00:14:33.761 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:33.761 "subtype": "NVMe", 00:14:33.761 "listen_addresses": [ 00:14:33.761 { 00:14:33.761 "transport": "TCP", 00:14:33.761 "trtype": "TCP", 00:14:33.761 "adrfam": "IPv4", 00:14:33.761 "traddr": "10.0.0.2", 00:14:33.761 "trsvcid": "4420" 00:14:33.761 } 00:14:33.761 ], 00:14:33.761 "allow_any_host": true, 00:14:33.761 "hosts": [], 00:14:33.761 "serial_number": "SPDK00000000000002", 00:14:33.761 "model_number": "SPDK bdev Controller", 00:14:33.761 "max_namespaces": 32, 00:14:33.761 "min_cntlid": 1, 00:14:33.761 "max_cntlid": 65519, 00:14:33.761 "namespaces": [ 00:14:33.761 { 00:14:33.761 "nsid": 1, 00:14:33.761 "bdev_name": "Null2", 00:14:33.761 "name": "Null2", 00:14:33.761 "nguid": "C3B8BA5910564F7ABD6006DBB2586C01", 00:14:33.761 "uuid": "c3b8ba59-1056-4f7a-bd60-06dbb2586c01" 00:14:33.761 } 00:14:33.761 ] 00:14:33.761 }, 00:14:33.761 { 00:14:33.761 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:14:33.761 "subtype": "NVMe", 00:14:33.761 "listen_addresses": [ 00:14:33.761 { 00:14:33.761 "transport": "TCP", 00:14:33.761 "trtype": "TCP", 00:14:33.761 "adrfam": "IPv4", 00:14:33.761 "traddr": "10.0.0.2", 00:14:33.761 "trsvcid": "4420" 00:14:33.761 } 00:14:33.761 ], 00:14:33.761 "allow_any_host": true, 00:14:33.761 "hosts": [], 00:14:33.761 "serial_number": "SPDK00000000000003", 00:14:33.761 "model_number": "SPDK bdev Controller", 00:14:33.761 "max_namespaces": 32, 00:14:33.761 "min_cntlid": 1, 00:14:33.761 "max_cntlid": 65519, 00:14:33.761 "namespaces": [ 00:14:33.761 { 00:14:33.761 "nsid": 1, 00:14:33.761 "bdev_name": "Null3", 00:14:33.761 "name": "Null3", 00:14:33.761 "nguid": "DB83DF9A22054212A6B7EE36024E0ABC", 00:14:33.761 "uuid": "db83df9a-2205-4212-a6b7-ee36024e0abc" 00:14:33.761 } 00:14:33.761 ] 
00:14:33.761 }, 00:14:33.761 { 00:14:33.761 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:14:33.761 "subtype": "NVMe", 00:14:33.761 "listen_addresses": [ 00:14:33.761 { 00:14:33.761 "transport": "TCP", 00:14:33.761 "trtype": "TCP", 00:14:33.761 "adrfam": "IPv4", 00:14:33.761 "traddr": "10.0.0.2", 00:14:33.761 "trsvcid": "4420" 00:14:33.761 } 00:14:33.761 ], 00:14:33.761 "allow_any_host": true, 00:14:33.761 "hosts": [], 00:14:33.761 "serial_number": "SPDK00000000000004", 00:14:33.761 "model_number": "SPDK bdev Controller", 00:14:33.761 "max_namespaces": 32, 00:14:33.761 "min_cntlid": 1, 00:14:33.761 "max_cntlid": 65519, 00:14:33.761 "namespaces": [ 00:14:33.761 { 00:14:33.761 "nsid": 1, 00:14:33.761 "bdev_name": "Null4", 00:14:33.761 "name": "Null4", 00:14:33.761 "nguid": "87B7F4E5F3A5432D947C96875E3417E5", 00:14:33.761 "uuid": "87b7f4e5-f3a5-432d-947c-96875e3417e5" 00:14:33.761 } 00:14:33.761 ] 00:14:33.761 } 00:14:33.761 ] 00:14:33.761 20:31:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.761 20:31:52 -- target/discovery.sh@42 -- # seq 1 4 00:14:34.023 20:31:52 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:34.023 20:31:52 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:34.023 20:31:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.023 20:31:52 -- common/autotest_common.sh@10 -- # set +x 00:14:34.023 20:31:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.023 20:31:52 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:14:34.023 20:31:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.023 20:31:52 -- common/autotest_common.sh@10 -- # set +x 00:14:34.023 20:31:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.023 20:31:52 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:34.023 20:31:52 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:34.023 20:31:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.023 20:31:52 -- common/autotest_common.sh@10 -- # set +x 00:14:34.023 20:31:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.023 20:31:52 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:14:34.023 20:31:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.023 20:31:52 -- common/autotest_common.sh@10 -- # set +x 00:14:34.023 20:31:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.023 20:31:52 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:34.023 20:31:52 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:34.023 20:31:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.023 20:31:52 -- common/autotest_common.sh@10 -- # set +x 00:14:34.023 20:31:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.023 20:31:52 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:14:34.024 20:31:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.024 20:31:52 -- common/autotest_common.sh@10 -- # set +x 00:14:34.024 20:31:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.024 20:31:52 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:34.024 20:31:52 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:14:34.024 20:31:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.024 20:31:52 -- common/autotest_common.sh@10 -- # set +x 00:14:34.024 20:31:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
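The teardown interleaved through this stretch follows one fixed pattern per index, the reverse of setup: delete the subsystem first, so nothing references the backing device, then delete its null bdev. Condensed into the equivalent direct calls (a sketch; rpc_cmd in the log drives the same socket):

  for i in 1 2 3 4; do
    ./scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    ./scripts/rpc.py bdev_null_delete "Null$i"
  done
  ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  # final assertion: no bdevs may be left behind
  [[ -z "$(./scripts/rpc.py bdev_get_bdevs | jq -r '.[].name')" ]]

The empty check_bdevs= capture just below is exactly that assertion passing.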
00:14:34.024 20:31:52 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:14:34.024 20:31:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.024 20:31:52 -- common/autotest_common.sh@10 -- # set +x 00:14:34.024 20:31:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.024 20:31:52 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:14:34.024 20:31:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.024 20:31:52 -- common/autotest_common.sh@10 -- # set +x 00:14:34.024 20:31:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.024 20:31:52 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:14:34.024 20:31:52 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:14:34.024 20:31:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.024 20:31:52 -- common/autotest_common.sh@10 -- # set +x 00:14:34.024 20:31:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.024 20:31:52 -- target/discovery.sh@49 -- # check_bdevs= 00:14:34.024 20:31:52 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:14:34.024 20:31:52 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:14:34.024 20:31:52 -- target/discovery.sh@57 -- # nvmftestfini 00:14:34.024 20:31:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:34.024 20:31:52 -- nvmf/common.sh@116 -- # sync 00:14:34.024 20:31:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:34.024 20:31:52 -- nvmf/common.sh@119 -- # set +e 00:14:34.024 20:31:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:34.024 20:31:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:34.024 rmmod nvme_tcp 00:14:34.024 rmmod nvme_fabrics 00:14:34.024 rmmod nvme_keyring 00:14:34.024 20:31:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:34.024 20:31:52 -- nvmf/common.sh@123 -- # set -e 00:14:34.024 20:31:52 -- nvmf/common.sh@124 -- # return 0 00:14:34.024 20:31:52 -- nvmf/common.sh@477 -- # '[' -n 3444715 ']' 00:14:34.024 20:31:52 -- nvmf/common.sh@478 -- # killprocess 3444715 00:14:34.024 20:31:52 -- common/autotest_common.sh@926 -- # '[' -z 3444715 ']' 00:14:34.024 20:31:52 -- common/autotest_common.sh@930 -- # kill -0 3444715 00:14:34.024 20:31:52 -- common/autotest_common.sh@931 -- # uname 00:14:34.024 20:31:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:34.024 20:31:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3444715 00:14:34.024 20:31:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:34.024 20:31:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:34.024 20:31:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3444715' 00:14:34.024 killing process with pid 3444715 00:14:34.024 20:31:52 -- common/autotest_common.sh@945 -- # kill 3444715 00:14:34.024 [2024-04-26 20:31:52.331702] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:14:34.024 20:31:52 -- common/autotest_common.sh@950 -- # wait 3444715 00:14:34.597 20:31:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:34.597 20:31:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:34.597 20:31:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:34.597 20:31:52 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:34.597 20:31:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:34.597 20:31:52 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.597 20:31:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:34.597 20:31:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.530 20:31:54 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:36.530 00:14:36.530 real 0m10.472s 00:14:36.530 user 0m7.215s 00:14:36.530 sys 0m5.332s 00:14:36.530 20:31:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:36.530 20:31:54 -- common/autotest_common.sh@10 -- # set +x 00:14:36.530 ************************************ 00:14:36.530 END TEST nvmf_discovery 00:14:36.530 ************************************ 00:14:36.828 20:31:54 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:36.828 20:31:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:36.828 20:31:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:36.828 20:31:54 -- common/autotest_common.sh@10 -- # set +x 00:14:36.828 ************************************ 00:14:36.828 START TEST nvmf_referrals 00:14:36.828 ************************************ 00:14:36.828 20:31:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:36.828 * Looking for test storage... 00:14:36.828 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:14:36.828 20:31:54 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:14:36.828 20:31:54 -- nvmf/common.sh@7 -- # uname -s 00:14:36.828 20:31:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:36.828 20:31:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:36.828 20:31:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:36.828 20:31:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:36.829 20:31:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:36.829 20:31:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:36.829 20:31:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:36.829 20:31:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:36.829 20:31:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:36.829 20:31:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:36.829 20:31:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:14:36.829 20:31:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:14:36.829 20:31:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:36.829 20:31:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:36.829 20:31:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:36.829 20:31:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:14:36.829 20:31:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:36.829 20:31:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:36.829 20:31:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:36.829 20:31:54 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.829 20:31:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.829 20:31:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.829 20:31:54 -- paths/export.sh@5 -- # export PATH 00:14:36.829 20:31:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.829 20:31:54 -- nvmf/common.sh@46 -- # : 0 00:14:36.829 20:31:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:36.829 20:31:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:36.829 20:31:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:36.829 20:31:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:36.829 20:31:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:36.829 20:31:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:36.829 20:31:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:36.829 20:31:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:36.829 20:31:54 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:14:36.829 20:31:54 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:14:36.829 20:31:54 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:14:36.829 20:31:54 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:14:36.829 20:31:54 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:36.829 20:31:54 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:36.829 20:31:54 -- target/referrals.sh@37 -- # nvmftestinit 00:14:36.829 20:31:54 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:14:36.829 20:31:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:36.829 20:31:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:36.829 20:31:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:36.829 20:31:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:36.829 20:31:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.829 20:31:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:36.829 20:31:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.829 20:31:54 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:14:36.829 20:31:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:36.829 20:31:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:36.829 20:31:54 -- common/autotest_common.sh@10 -- # set +x 00:14:43.425 20:32:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:43.425 20:32:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:43.425 20:32:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:43.425 20:32:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:43.425 20:32:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:43.425 20:32:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:43.425 20:32:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:43.425 20:32:00 -- nvmf/common.sh@294 -- # net_devs=() 00:14:43.425 20:32:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:43.425 20:32:00 -- nvmf/common.sh@295 -- # e810=() 00:14:43.425 20:32:00 -- nvmf/common.sh@295 -- # local -ga e810 00:14:43.425 20:32:00 -- nvmf/common.sh@296 -- # x722=() 00:14:43.425 20:32:00 -- nvmf/common.sh@296 -- # local -ga x722 00:14:43.425 20:32:00 -- nvmf/common.sh@297 -- # mlx=() 00:14:43.425 20:32:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:43.425 20:32:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:43.425 20:32:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:43.425 20:32:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:43.425 20:32:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:43.425 20:32:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:43.425 20:32:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:43.425 20:32:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:43.425 20:32:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:43.425 20:32:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:43.425 20:32:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:43.425 20:32:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:43.425 20:32:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:43.425 20:32:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:43.425 20:32:00 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:14:43.425 20:32:00 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:14:43.425 20:32:00 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:14:43.425 20:32:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:43.425 20:32:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:43.425 20:32:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:14:43.425 Found 0000:27:00.0 (0x8086 - 0x159b) 00:14:43.425 20:32:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:43.426 20:32:00 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:43.426 20:32:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.426 20:32:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.426 20:32:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:43.426 20:32:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:43.426 20:32:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:14:43.426 Found 0000:27:00.1 (0x8086 - 0x159b) 00:14:43.426 20:32:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:43.426 20:32:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:43.426 20:32:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.426 20:32:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.426 20:32:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:43.426 20:32:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:43.426 20:32:00 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:14:43.426 20:32:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:43.426 20:32:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.426 20:32:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:43.426 20:32:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.426 20:32:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:14:43.426 Found net devices under 0000:27:00.0: cvl_0_0 00:14:43.426 20:32:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.426 20:32:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:43.426 20:32:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.426 20:32:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:43.426 20:32:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.426 20:32:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:14:43.426 Found net devices under 0000:27:00.1: cvl_0_1 00:14:43.426 20:32:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.426 20:32:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:43.426 20:32:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:43.426 20:32:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:43.426 20:32:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:43.426 20:32:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:43.426 20:32:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:43.426 20:32:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:43.426 20:32:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:43.426 20:32:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:43.426 20:32:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:43.426 20:32:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:43.426 20:32:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:43.426 20:32:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:43.426 20:32:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:43.426 20:32:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:43.426 20:32:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:43.426 20:32:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:43.426 20:32:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:43.426 20:32:00 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:14:43.426 20:32:00 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:43.426 20:32:00 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:43.426 20:32:00 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:43.426 20:32:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:43.426 20:32:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:43.426 20:32:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:43.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:43.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:14:43.426 00:14:43.426 --- 10.0.0.2 ping statistics --- 00:14:43.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.426 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:14:43.426 20:32:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:43.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:43.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:14:43.426 00:14:43.426 --- 10.0.0.1 ping statistics --- 00:14:43.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.426 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:14:43.426 20:32:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:43.426 20:32:01 -- nvmf/common.sh@410 -- # return 0 00:14:43.426 20:32:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:43.426 20:32:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:43.426 20:32:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:43.426 20:32:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:43.426 20:32:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:43.426 20:32:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:43.426 20:32:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:43.426 20:32:01 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:14:43.426 20:32:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:43.426 20:32:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:43.426 20:32:01 -- common/autotest_common.sh@10 -- # set +x 00:14:43.426 20:32:01 -- nvmf/common.sh@469 -- # nvmfpid=3448961 00:14:43.426 20:32:01 -- nvmf/common.sh@470 -- # waitforlisten 3448961 00:14:43.426 20:32:01 -- common/autotest_common.sh@819 -- # '[' -z 3448961 ']' 00:14:43.426 20:32:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.426 20:32:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:43.426 20:32:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.426 20:32:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:43.426 20:32:01 -- common/autotest_common.sh@10 -- # set +x 00:14:43.426 20:32:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:43.426 [2024-04-26 20:32:01.163833] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:14:43.426 [2024-04-26 20:32:01.163945] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.426 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.426 [2024-04-26 20:32:01.289071] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:43.426 [2024-04-26 20:32:01.383637] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:43.426 [2024-04-26 20:32:01.383808] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.426 [2024-04-26 20:32:01.383821] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.426 [2024-04-26 20:32:01.383831] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:43.426 [2024-04-26 20:32:01.383912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.426 [2024-04-26 20:32:01.384013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:43.426 [2024-04-26 20:32:01.384134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.427 [2024-04-26 20:32:01.384145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:43.685 20:32:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:43.685 20:32:01 -- common/autotest_common.sh@852 -- # return 0 00:14:43.685 20:32:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:43.685 20:32:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:43.685 20:32:01 -- common/autotest_common.sh@10 -- # set +x 00:14:43.685 20:32:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.685 20:32:01 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:43.685 20:32:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.685 20:32:01 -- common/autotest_common.sh@10 -- # set +x 00:14:43.685 [2024-04-26 20:32:01.887661] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.685 20:32:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.685 20:32:01 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:14:43.685 20:32:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.685 20:32:01 -- common/autotest_common.sh@10 -- # set +x 00:14:43.685 [2024-04-26 20:32:01.899861] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:43.685 20:32:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.685 20:32:01 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:14:43.685 20:32:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.685 20:32:01 -- common/autotest_common.sh@10 -- # set +x 00:14:43.685 20:32:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.685 20:32:01 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:14:43.685 20:32:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.685 20:32:01 -- common/autotest_common.sh@10 -- # set +x 00:14:43.685 20:32:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.685 20:32:01 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:14:43.685 20:32:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.685 20:32:01 -- common/autotest_common.sh@10 -- # set +x 00:14:43.685 20:32:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.685 20:32:01 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:43.685 20:32:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.685 20:32:01 -- common/autotest_common.sh@10 -- # set +x 00:14:43.685 20:32:01 -- target/referrals.sh@48 -- # jq length 00:14:43.685 20:32:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.685 20:32:01 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:14:43.685 20:32:01 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:14:43.685 20:32:01 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:43.685 20:32:01 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:43.685 20:32:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.685 20:32:01 -- common/autotest_common.sh@10 -- # set +x 00:14:43.685 20:32:01 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:43.685 20:32:01 -- target/referrals.sh@21 -- # sort 00:14:43.685 20:32:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.685 20:32:01 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:43.685 20:32:02 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:43.685 20:32:02 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:14:43.685 20:32:02 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:43.685 20:32:02 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:43.685 20:32:02 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:43.685 20:32:02 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:43.685 20:32:02 -- target/referrals.sh@26 -- # sort 00:14:43.945 20:32:02 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:43.945 20:32:02 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:43.945 20:32:02 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:14:43.945 20:32:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.945 20:32:02 -- common/autotest_common.sh@10 -- # set +x 00:14:43.945 20:32:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.945 20:32:02 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:14:43.945 20:32:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.945 20:32:02 -- common/autotest_common.sh@10 -- # set +x 00:14:43.945 20:32:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.945 20:32:02 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:14:43.945 20:32:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.945 20:32:02 -- common/autotest_common.sh@10 -- # set +x 00:14:43.945 20:32:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.945 20:32:02 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:43.945 20:32:02 -- target/referrals.sh@56 -- # jq length 00:14:43.945 20:32:02 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.945 20:32:02 -- common/autotest_common.sh@10 -- # set +x 00:14:43.945 20:32:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.945 20:32:02 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:14:43.945 20:32:02 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:14:43.945 20:32:02 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:43.945 20:32:02 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:43.945 20:32:02 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:43.945 20:32:02 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:43.945 20:32:02 -- target/referrals.sh@26 -- # sort 00:14:43.945 20:32:02 -- target/referrals.sh@26 -- # echo 00:14:43.945 20:32:02 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:14:43.945 20:32:02 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:14:43.945 20:32:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.945 20:32:02 -- common/autotest_common.sh@10 -- # set +x 00:14:43.945 20:32:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.945 20:32:02 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:43.945 20:32:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.945 20:32:02 -- common/autotest_common.sh@10 -- # set +x 00:14:43.945 20:32:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.945 20:32:02 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:14:43.945 20:32:02 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:43.945 20:32:02 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:43.945 20:32:02 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:43.945 20:32:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.945 20:32:02 -- common/autotest_common.sh@10 -- # set +x 00:14:43.945 20:32:02 -- target/referrals.sh@21 -- # sort 00:14:43.946 20:32:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.206 20:32:02 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:14:44.206 20:32:02 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:44.206 20:32:02 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:14:44.206 20:32:02 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:44.206 20:32:02 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:44.206 20:32:02 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:44.206 20:32:02 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:44.206 20:32:02 -- target/referrals.sh@26 -- # sort 00:14:44.206 20:32:02 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:14:44.206 20:32:02 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:44.206 20:32:02 -- target/referrals.sh@67 -- # jq -r .subnqn 00:14:44.206 20:32:02 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:14:44.206 20:32:02 -- 
target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:44.206 20:32:02 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:44.206 20:32:02 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:44.206 20:32:02 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:44.206 20:32:02 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:14:44.206 20:32:02 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:44.206 20:32:02 -- target/referrals.sh@68 -- # jq -r .subnqn 00:14:44.206 20:32:02 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:44.206 20:32:02 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:44.206 20:32:02 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:44.207 20:32:02 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:44.207 20:32:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.207 20:32:02 -- common/autotest_common.sh@10 -- # set +x 00:14:44.207 20:32:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.207 20:32:02 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:14:44.207 20:32:02 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:44.207 20:32:02 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:44.207 20:32:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.207 20:32:02 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:44.207 20:32:02 -- common/autotest_common.sh@10 -- # set +x 00:14:44.207 20:32:02 -- target/referrals.sh@21 -- # sort 00:14:44.207 20:32:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.466 20:32:02 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:14:44.466 20:32:02 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:44.466 20:32:02 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:14:44.466 20:32:02 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:44.466 20:32:02 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:44.466 20:32:02 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:44.466 20:32:02 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:44.466 20:32:02 -- target/referrals.sh@26 -- # sort 00:14:44.466 20:32:02 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:14:44.466 20:32:02 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:44.466 20:32:02 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:14:44.466 20:32:02 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:44.466 20:32:02 -- target/referrals.sh@75 -- # jq -r .subnqn 00:14:44.466 20:32:02 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:44.466 20:32:02 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:44.466 20:32:02 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:14:44.466 20:32:02 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:14:44.466 20:32:02 -- target/referrals.sh@76 -- # jq -r .subnqn 00:14:44.466 20:32:02 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:44.466 20:32:02 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:44.466 20:32:02 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:44.725 20:32:02 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:44.725 20:32:02 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:14:44.725 20:32:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.725 20:32:02 -- common/autotest_common.sh@10 -- # set +x 00:14:44.725 20:32:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.725 20:32:02 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:44.725 20:32:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.725 20:32:02 -- common/autotest_common.sh@10 -- # set +x 00:14:44.725 20:32:02 -- target/referrals.sh@82 -- # jq length 00:14:44.725 20:32:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.725 20:32:02 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:14:44.725 20:32:02 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:14:44.725 20:32:02 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:44.725 20:32:02 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:44.725 20:32:02 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:44.725 20:32:02 -- target/referrals.sh@26 -- # sort 00:14:44.725 20:32:02 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:44.725 20:32:03 -- target/referrals.sh@26 -- # echo 00:14:44.725 20:32:03 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:14:44.725 20:32:03 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:14:44.725 20:32:03 -- target/referrals.sh@86 -- # nvmftestfini 00:14:44.725 20:32:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:44.725 20:32:03 -- nvmf/common.sh@116 -- # sync 00:14:44.725 20:32:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:44.725 20:32:03 -- nvmf/common.sh@119 -- # set +e 00:14:44.725 20:32:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:44.725 20:32:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:44.725 rmmod nvme_tcp 00:14:44.725 rmmod nvme_fabrics 00:14:44.725 rmmod nvme_keyring 00:14:44.985 20:32:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:44.985 20:32:03 -- nvmf/common.sh@123 -- # set -e 00:14:44.985 20:32:03 -- nvmf/common.sh@124 -- # return 0 00:14:44.985 20:32:03 -- nvmf/common.sh@477 
-- # '[' -n 3448961 ']' 00:14:44.985 20:32:03 -- nvmf/common.sh@478 -- # killprocess 3448961 00:14:44.985 20:32:03 -- common/autotest_common.sh@926 -- # '[' -z 3448961 ']' 00:14:44.985 20:32:03 -- common/autotest_common.sh@930 -- # kill -0 3448961 00:14:44.985 20:32:03 -- common/autotest_common.sh@931 -- # uname 00:14:44.985 20:32:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:44.985 20:32:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3448961 00:14:44.985 20:32:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:44.985 20:32:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:44.985 20:32:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3448961' 00:14:44.985 killing process with pid 3448961 00:14:44.985 20:32:03 -- common/autotest_common.sh@945 -- # kill 3448961 00:14:44.985 20:32:03 -- common/autotest_common.sh@950 -- # wait 3448961 00:14:45.557 20:32:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:45.557 20:32:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:45.557 20:32:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:45.557 20:32:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:45.557 20:32:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:45.557 20:32:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.557 20:32:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:45.557 20:32:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.471 20:32:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:47.471 00:14:47.471 real 0m10.763s 00:14:47.471 user 0m10.661s 00:14:47.471 sys 0m5.079s 00:14:47.471 20:32:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:47.471 20:32:05 -- common/autotest_common.sh@10 -- # set +x 00:14:47.471 ************************************ 00:14:47.471 END TEST nvmf_referrals 00:14:47.471 ************************************ 00:14:47.471 20:32:05 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:47.471 20:32:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:47.471 20:32:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:47.471 20:32:05 -- common/autotest_common.sh@10 -- # set +x 00:14:47.471 ************************************ 00:14:47.471 START TEST nvmf_connect_disconnect 00:14:47.471 ************************************ 00:14:47.471 20:32:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:47.471 * Looking for test storage... 
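For the nvmf_referrals run that just ended, the assertion repeated throughout (get_referral_ips) is that the RPC view and the on-wire discovery view stay in agreement after every referral add and remove. Condensed from the rpc and nvme branches above, with the host NQN/ID flags omitted for brevity (the jq filters are the ones the test uses):

  rpc_ips=$(./scripts/rpc.py nvmf_discovery_get_referrals \
      | jq -r '.[].address.traddr' | sort | xargs)
  nvme_ips=$(nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
      | sort | xargs)
  [[ "$rpc_ips" == "$nvme_ips" ]]   # e.g. "127.0.0.2 127.0.0.3 127.0.0.4" on both sides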
00:14:47.471 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:14:47.471 20:32:05 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:14:47.471 20:32:05 -- nvmf/common.sh@7 -- # uname -s 00:14:47.471 20:32:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:47.471 20:32:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:47.471 20:32:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:47.471 20:32:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:47.471 20:32:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:47.471 20:32:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:47.471 20:32:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:47.471 20:32:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:47.471 20:32:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:47.471 20:32:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:47.471 20:32:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:14:47.471 20:32:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:14:47.471 20:32:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:47.471 20:32:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:47.471 20:32:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:47.471 20:32:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:14:47.471 20:32:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:47.471 20:32:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.471 20:32:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.471 20:32:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.471 20:32:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.471 20:32:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.471 20:32:05 -- paths/export.sh@5 -- # export PATH 00:14:47.471 20:32:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.471 20:32:05 -- nvmf/common.sh@46 -- # : 0 00:14:47.471 20:32:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:47.471 20:32:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:47.471 20:32:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:47.471 20:32:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:47.471 20:32:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:47.471 20:32:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:47.471 20:32:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:47.471 20:32:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:47.471 20:32:05 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:47.471 20:32:05 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:47.471 20:32:05 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:14:47.471 20:32:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:47.471 20:32:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:47.471 20:32:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:47.471 20:32:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:47.471 20:32:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:47.471 20:32:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.471 20:32:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.471 20:32:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.471 20:32:05 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:14:47.471 20:32:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:47.471 20:32:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:47.471 20:32:05 -- common/autotest_common.sh@10 -- # set +x 00:14:52.759 20:32:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:52.759 20:32:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:52.759 20:32:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:52.759 20:32:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:52.759 20:32:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:52.759 20:32:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:52.759 20:32:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:52.759 20:32:10 -- nvmf/common.sh@294 -- # net_devs=() 00:14:52.759 20:32:10 -- nvmf/common.sh@294 -- # local -ga net_devs 
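(editor's note: the device scan that follows classifies NICs purely by PCI vendor:device ID, bucketing them into the e810/x722/mlx arrays before probing drivers. A minimal stand-alone sketch of the same check using only sysfs; the ID pairs are the ones this common.sh revision tests for, 0x8086:0x159b being the Intel E810 "ice" ports this host reports, and the Mellanox wildcard is an editorial simplification:)

for dev in /sys/bus/pci/devices/*; do
  vendor=$(cat "$dev/vendor")
  device=$(cat "$dev/device")
  case "$vendor $device" in
    '0x8086 0x1592'|'0x8086 0x159b') echo "e810 (ice):  $dev" ;;
    '0x8086 0x37d2')                 echo "x722 (i40e): $dev" ;;
    '0x15b3 '*)                      echo "mellanox:    $dev" ;;
  esac
done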
00:14:52.759 20:32:10 -- nvmf/common.sh@295 -- # e810=() 00:14:52.759 20:32:10 -- nvmf/common.sh@295 -- # local -ga e810 00:14:52.759 20:32:10 -- nvmf/common.sh@296 -- # x722=() 00:14:52.759 20:32:10 -- nvmf/common.sh@296 -- # local -ga x722 00:14:52.759 20:32:10 -- nvmf/common.sh@297 -- # mlx=() 00:14:52.759 20:32:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:52.759 20:32:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:52.759 20:32:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:52.759 20:32:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:52.759 20:32:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:52.759 20:32:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:52.759 20:32:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:52.759 20:32:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:52.759 20:32:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:52.759 20:32:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:52.759 20:32:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:52.759 20:32:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:52.759 20:32:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:52.759 20:32:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:52.759 20:32:10 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:14:52.759 20:32:10 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:14:52.759 20:32:10 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:14:52.759 20:32:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:52.759 20:32:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:52.759 20:32:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:14:52.759 Found 0000:27:00.0 (0x8086 - 0x159b) 00:14:52.759 20:32:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:52.759 20:32:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:52.759 20:32:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:52.759 20:32:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:52.759 20:32:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:52.759 20:32:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:52.759 20:32:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:14:52.759 Found 0000:27:00.1 (0x8086 - 0x159b) 00:14:52.759 20:32:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:52.759 20:32:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:52.759 20:32:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:52.759 20:32:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:52.759 20:32:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:52.759 20:32:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:52.759 20:32:10 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:14:52.759 20:32:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:52.759 20:32:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:52.759 20:32:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:52.759 20:32:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:52.759 20:32:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:14:52.759 Found net devices under 0000:27:00.0: 
cvl_0_0 00:14:52.759 20:32:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:52.759 20:32:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:52.759 20:32:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:52.759 20:32:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:52.759 20:32:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:52.759 20:32:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:14:52.759 Found net devices under 0000:27:00.1: cvl_0_1 00:14:52.759 20:32:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:52.759 20:32:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:52.759 20:32:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:52.759 20:32:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:52.759 20:32:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:52.759 20:32:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:52.759 20:32:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:52.759 20:32:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:52.759 20:32:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:52.759 20:32:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:52.759 20:32:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:52.759 20:32:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:52.759 20:32:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:52.759 20:32:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:52.759 20:32:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:52.759 20:32:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:52.759 20:32:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:52.759 20:32:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:52.759 20:32:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:52.759 20:32:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:52.759 20:32:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:52.759 20:32:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:52.759 20:32:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:52.759 20:32:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:52.759 20:32:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:52.759 20:32:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:52.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:52.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:14:52.759 00:14:52.759 --- 10.0.0.2 ping statistics --- 00:14:52.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.759 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:14:52.759 20:32:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:52.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:52.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.418 ms 00:14:52.759 00:14:52.759 --- 10.0.0.1 ping statistics --- 00:14:52.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.759 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:14:52.759 20:32:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:52.759 20:32:10 -- nvmf/common.sh@410 -- # return 0 00:14:52.759 20:32:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:52.759 20:32:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:52.759 20:32:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:52.759 20:32:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:52.759 20:32:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:52.759 20:32:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:52.759 20:32:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:52.759 20:32:10 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:14:52.759 20:32:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:52.759 20:32:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:52.759 20:32:10 -- common/autotest_common.sh@10 -- # set +x 00:14:52.759 20:32:10 -- nvmf/common.sh@469 -- # nvmfpid=3453545 00:14:52.759 20:32:10 -- nvmf/common.sh@470 -- # waitforlisten 3453545 00:14:52.759 20:32:10 -- common/autotest_common.sh@819 -- # '[' -z 3453545 ']' 00:14:52.759 20:32:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.759 20:32:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:52.759 20:32:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.759 20:32:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:52.759 20:32:10 -- common/autotest_common.sh@10 -- # set +x 00:14:52.759 20:32:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:52.759 [2024-04-26 20:32:11.021706] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:52.759 [2024-04-26 20:32:11.021810] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.759 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.019 [2024-04-26 20:32:11.141971] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:53.019 [2024-04-26 20:32:11.234863] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:53.019 [2024-04-26 20:32:11.235033] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.019 [2024-04-26 20:32:11.235046] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.019 [2024-04-26 20:32:11.235054] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
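(editor's note: the target application above is launched inside the cvl_0_0_ns_spdk namespace created earlier; -m 0xF pins one reactor to each of cores 0-3, which is why four "Reactor started" lines follow, and -e 0xFFFF enables every tracepoint group. waitforlisten amounts to polling for the RPC socket before any rpc_cmd runs; a sketch, assuming the default /var/tmp/spdk.sock path seen in the log:)

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# block until the UNIX-domain RPC socket exists, then RPCs are safe to issue
until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done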
00:14:53.019 [2024-04-26 20:32:11.235210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.019 [2024-04-26 20:32:11.235232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.019 [2024-04-26 20:32:11.235335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.019 [2024-04-26 20:32:11.235345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:53.591 20:32:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:53.591 20:32:11 -- common/autotest_common.sh@852 -- # return 0 00:14:53.591 20:32:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:53.591 20:32:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:53.591 20:32:11 -- common/autotest_common.sh@10 -- # set +x 00:14:53.591 20:32:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.592 20:32:11 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:53.592 20:32:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.592 20:32:11 -- common/autotest_common.sh@10 -- # set +x 00:14:53.592 [2024-04-26 20:32:11.782985] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:53.592 20:32:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.592 20:32:11 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:14:53.592 20:32:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.592 20:32:11 -- common/autotest_common.sh@10 -- # set +x 00:14:53.592 20:32:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.592 20:32:11 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:14:53.592 20:32:11 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:53.592 20:32:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.592 20:32:11 -- common/autotest_common.sh@10 -- # set +x 00:14:53.592 20:32:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.592 20:32:11 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:53.592 20:32:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.592 20:32:11 -- common/autotest_common.sh@10 -- # set +x 00:14:53.592 20:32:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.592 20:32:11 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:53.592 20:32:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.592 20:32:11 -- common/autotest_common.sh@10 -- # set +x 00:14:53.592 [2024-04-26 20:32:11.851611] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:53.592 20:32:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.592 20:32:11 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:14:53.592 20:32:11 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:14:53.592 20:32:11 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:14:53.592 20:32:11 -- target/connect_disconnect.sh@34 -- # set +x 00:14:56.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.113 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:15:05.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.627 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.139 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.577 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.482 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:11.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:16.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:23.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:26.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:34.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:36.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:39.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:48.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:55.191 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:16:57.727 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:02.167 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:04.701 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:06.729 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:09.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:11.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:13.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:16.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:18.154 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:20.720 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:22.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:25.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:27.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:30.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:32.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:34.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:37.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:39.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:41.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:44.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:46.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:48.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:50.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:53.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:55.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:57.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:00.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:02.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:04.627 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:06.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:09.065 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:11.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:14.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:16.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:18.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:20.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:23.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:24.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:27.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:30.097 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:32.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:34.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:36.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:39.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:41.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:43.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:43.549 20:36:01 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
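(editor's note: the 100 "disconnected 1 controller(s)" lines above are one provision-once, connect/disconnect-many pattern, which accounts for the roughly four-minute wall time reported below. Condensed into a sketch with the same arguments the script traced; rpc.py stands in for the suite's rpc_cmd wrapper, an editorial substitution, and NVME_HOST is the --hostnqn/--hostid pair defined in common.sh:)

# one-time target provisioning
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
./scripts/rpc.py bdev_malloc_create 64 512            # returns the bdev name Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# 100 connect/disconnect cycles; each disconnect prints one of the lines above
for i in {1..100}; do
  nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
done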
00:18:43.549 20:36:01 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:18:43.549 20:36:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:43.549 20:36:01 -- nvmf/common.sh@116 -- # sync 00:18:43.549 20:36:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:43.549 20:36:01 -- nvmf/common.sh@119 -- # set +e 00:18:43.549 20:36:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:43.549 20:36:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:43.549 rmmod nvme_tcp 00:18:43.549 rmmod nvme_fabrics 00:18:43.549 rmmod nvme_keyring 00:18:43.549 20:36:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:43.549 20:36:01 -- nvmf/common.sh@123 -- # set -e 00:18:43.549 20:36:01 -- nvmf/common.sh@124 -- # return 0 00:18:43.549 20:36:01 -- nvmf/common.sh@477 -- # '[' -n 3453545 ']' 00:18:43.549 20:36:01 -- nvmf/common.sh@478 -- # killprocess 3453545 00:18:43.549 20:36:01 -- common/autotest_common.sh@926 -- # '[' -z 3453545 ']' 00:18:43.549 20:36:01 -- common/autotest_common.sh@930 -- # kill -0 3453545 00:18:43.549 20:36:01 -- common/autotest_common.sh@931 -- # uname 00:18:43.549 20:36:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:43.549 20:36:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3453545 00:18:43.549 20:36:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:43.549 20:36:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:43.549 20:36:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3453545' 00:18:43.549 killing process with pid 3453545 00:18:43.549 20:36:01 -- common/autotest_common.sh@945 -- # kill 3453545 00:18:43.549 20:36:01 -- common/autotest_common.sh@950 -- # wait 3453545 00:18:44.120 20:36:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:44.120 20:36:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:44.120 20:36:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:44.120 20:36:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:44.120 20:36:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:44.120 20:36:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.120 20:36:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:44.120 20:36:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.032 20:36:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:46.032 00:18:46.032 real 3m58.586s 00:18:46.032 user 15m18.461s 00:18:46.032 sys 0m13.044s 00:18:46.032 20:36:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:46.032 20:36:04 -- common/autotest_common.sh@10 -- # set +x 00:18:46.032 ************************************ 00:18:46.032 END TEST nvmf_connect_disconnect 00:18:46.032 ************************************ 00:18:46.032 20:36:04 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:18:46.032 20:36:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:46.032 20:36:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:46.032 20:36:04 -- common/autotest_common.sh@10 -- # set +x 00:18:46.032 ************************************ 00:18:46.032 START TEST nvmf_multitarget 00:18:46.032 ************************************ 00:18:46.032 20:36:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:18:46.291 * Looking for test storage... 
00:18:46.291 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:18:46.291 20:36:04 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:18:46.291 20:36:04 -- nvmf/common.sh@7 -- # uname -s 00:18:46.291 20:36:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:46.291 20:36:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.291 20:36:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.291 20:36:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.291 20:36:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.291 20:36:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.291 20:36:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.291 20:36:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.291 20:36:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.291 20:36:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.291 20:36:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:18:46.291 20:36:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:18:46.291 20:36:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.291 20:36:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.291 20:36:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:46.291 20:36:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:18:46.291 20:36:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.291 20:36:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.291 20:36:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.291 20:36:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.291 20:36:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.291 20:36:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.291 20:36:04 -- paths/export.sh@5 -- # export PATH 00:18:46.291 20:36:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.291 20:36:04 -- nvmf/common.sh@46 -- # : 0 00:18:46.291 20:36:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:46.291 20:36:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:46.291 20:36:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:46.291 20:36:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.291 20:36:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.291 20:36:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:46.291 20:36:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:46.291 20:36:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:46.291 20:36:04 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:18:46.291 20:36:04 -- target/multitarget.sh@15 -- # nvmftestinit 00:18:46.291 20:36:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:46.291 20:36:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.291 20:36:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:46.291 20:36:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:46.291 20:36:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:46.291 20:36:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.291 20:36:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.291 20:36:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.291 20:36:04 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:18:46.291 20:36:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:46.291 20:36:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:46.291 20:36:04 -- common/autotest_common.sh@10 -- # set +x 00:18:51.561 20:36:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:51.561 20:36:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:51.561 20:36:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:51.561 20:36:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:51.561 20:36:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:51.561 20:36:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:51.561 20:36:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:51.561 20:36:09 -- nvmf/common.sh@294 -- # net_devs=() 00:18:51.561 20:36:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:51.561 20:36:09 -- 
nvmf/common.sh@295 -- # e810=() 00:18:51.561 20:36:09 -- nvmf/common.sh@295 -- # local -ga e810 00:18:51.561 20:36:09 -- nvmf/common.sh@296 -- # x722=() 00:18:51.561 20:36:09 -- nvmf/common.sh@296 -- # local -ga x722 00:18:51.561 20:36:09 -- nvmf/common.sh@297 -- # mlx=() 00:18:51.561 20:36:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:51.561 20:36:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:51.561 20:36:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:51.561 20:36:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:51.561 20:36:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:51.561 20:36:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:51.561 20:36:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:51.561 20:36:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:51.561 20:36:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:51.561 20:36:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:51.561 20:36:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:51.561 20:36:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:51.561 20:36:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:51.561 20:36:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:51.561 20:36:09 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:18:51.561 20:36:09 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:18:51.561 20:36:09 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:18:51.561 20:36:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:51.561 20:36:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:51.561 20:36:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:18:51.561 Found 0000:27:00.0 (0x8086 - 0x159b) 00:18:51.561 20:36:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:51.561 20:36:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:51.561 20:36:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.561 20:36:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:51.561 20:36:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:51.561 20:36:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:51.561 20:36:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:18:51.561 Found 0000:27:00.1 (0x8086 - 0x159b) 00:18:51.561 20:36:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:51.561 20:36:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:51.561 20:36:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.561 20:36:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:51.561 20:36:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:51.561 20:36:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:51.561 20:36:09 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:18:51.561 20:36:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:51.561 20:36:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.561 20:36:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:51.561 20:36:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.561 20:36:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:18:51.561 Found net devices under 0000:27:00.0: cvl_0_0 00:18:51.561 
20:36:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.561 20:36:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:51.561 20:36:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.561 20:36:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:51.561 20:36:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.561 20:36:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:18:51.561 Found net devices under 0000:27:00.1: cvl_0_1 00:18:51.561 20:36:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.561 20:36:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:51.561 20:36:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:51.561 20:36:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:51.561 20:36:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:51.561 20:36:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:51.561 20:36:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:51.561 20:36:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:51.561 20:36:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:51.561 20:36:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:51.561 20:36:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:51.561 20:36:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:51.561 20:36:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:51.561 20:36:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:51.561 20:36:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:51.561 20:36:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:51.561 20:36:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:51.561 20:36:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:51.561 20:36:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:51.561 20:36:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:51.561 20:36:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:51.561 20:36:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:51.561 20:36:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:51.561 20:36:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:51.561 20:36:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:51.561 20:36:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:51.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:51.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.815 ms 00:18:51.562 00:18:51.562 --- 10.0.0.2 ping statistics --- 00:18:51.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.562 rtt min/avg/max/mdev = 0.815/0.815/0.815/0.000 ms 00:18:51.562 20:36:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:51.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:51.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:18:51.562 00:18:51.562 --- 10.0.0.1 ping statistics --- 00:18:51.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.562 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:18:51.562 20:36:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:51.562 20:36:09 -- nvmf/common.sh@410 -- # return 0 00:18:51.562 20:36:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:51.562 20:36:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:51.562 20:36:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:51.562 20:36:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:51.562 20:36:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:51.562 20:36:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:51.562 20:36:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:51.562 20:36:09 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:18:51.562 20:36:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:51.562 20:36:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:51.562 20:36:09 -- common/autotest_common.sh@10 -- # set +x 00:18:51.562 20:36:09 -- nvmf/common.sh@469 -- # nvmfpid=3504467 00:18:51.562 20:36:09 -- nvmf/common.sh@470 -- # waitforlisten 3504467 00:18:51.562 20:36:09 -- common/autotest_common.sh@819 -- # '[' -z 3504467 ']' 00:18:51.562 20:36:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.562 20:36:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:51.562 20:36:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.562 20:36:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:51.562 20:36:09 -- common/autotest_common.sh@10 -- # set +x 00:18:51.562 20:36:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:51.562 [2024-04-26 20:36:09.799631] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:18:51.562 [2024-04-26 20:36:09.799737] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:51.562 EAL: No free 2048 kB hugepages reported on node 1 00:18:51.819 [2024-04-26 20:36:09.919695] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:51.819 [2024-04-26 20:36:10.017387] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:51.819 [2024-04-26 20:36:10.017583] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:51.819 [2024-04-26 20:36:10.017598] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:51.819 [2024-04-26 20:36:10.017609] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
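(editor's note: the multitarget test that follows drives this single nvmf_tgt process, which can host several named targets, through the multitarget_rpc.py wrapper. Its create/verify/delete cycle condenses to the sketch below, matching the calls and jq length checks logged next; -n names the target and -s here appears to cap subsystems per target, and the relative script path is an editorial shortening of the workspace path in the log:)

rpc_py=./test/nvmf/target/multitarget_rpc.py
$rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
$rpc_py nvmf_get_targets | jq length      # 3: the default target plus the two above
$rpc_py nvmf_delete_target -n nvmf_tgt_1
$rpc_py nvmf_delete_target -n nvmf_tgt_2
$rpc_py nvmf_get_targets | jq length      # back to 1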
00:18:51.819 [2024-04-26 20:36:10.017683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.819 [2024-04-26 20:36:10.017719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:51.819 [2024-04-26 20:36:10.017745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.819 [2024-04-26 20:36:10.017755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:52.386 20:36:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:52.386 20:36:10 -- common/autotest_common.sh@852 -- # return 0 00:18:52.386 20:36:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:52.386 20:36:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:52.386 20:36:10 -- common/autotest_common.sh@10 -- # set +x 00:18:52.386 20:36:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:52.386 20:36:10 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:52.386 20:36:10 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:52.386 20:36:10 -- target/multitarget.sh@21 -- # jq length 00:18:52.386 20:36:10 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:18:52.386 20:36:10 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:18:52.386 "nvmf_tgt_1" 00:18:52.386 20:36:10 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:18:52.646 "nvmf_tgt_2" 00:18:52.646 20:36:10 -- target/multitarget.sh@28 -- # jq length 00:18:52.646 20:36:10 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:52.646 20:36:10 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:18:52.646 20:36:10 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:18:52.646 true 00:18:52.905 20:36:10 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:18:52.905 true 00:18:52.905 20:36:11 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:52.905 20:36:11 -- target/multitarget.sh@35 -- # jq length 00:18:52.905 20:36:11 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:18:52.905 20:36:11 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:18:52.905 20:36:11 -- target/multitarget.sh@41 -- # nvmftestfini 00:18:52.905 20:36:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:52.905 20:36:11 -- nvmf/common.sh@116 -- # sync 00:18:52.905 20:36:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:52.905 20:36:11 -- nvmf/common.sh@119 -- # set +e 00:18:52.905 20:36:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:52.905 20:36:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:52.905 rmmod nvme_tcp 00:18:52.905 rmmod nvme_fabrics 00:18:52.905 rmmod nvme_keyring 00:18:52.905 20:36:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:52.905 20:36:11 -- nvmf/common.sh@123 -- # set -e 00:18:52.905 20:36:11 -- nvmf/common.sh@124 -- # return 0 00:18:52.905 20:36:11 -- nvmf/common.sh@477 
-- # '[' -n 3504467 ']' 00:18:52.905 20:36:11 -- nvmf/common.sh@478 -- # killprocess 3504467 00:18:52.905 20:36:11 -- common/autotest_common.sh@926 -- # '[' -z 3504467 ']' 00:18:52.905 20:36:11 -- common/autotest_common.sh@930 -- # kill -0 3504467 00:18:53.188 20:36:11 -- common/autotest_common.sh@931 -- # uname 00:18:53.188 20:36:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:53.188 20:36:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3504467 00:18:53.188 20:36:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:53.188 20:36:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:53.188 20:36:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3504467' 00:18:53.188 killing process with pid 3504467 00:18:53.188 20:36:11 -- common/autotest_common.sh@945 -- # kill 3504467 00:18:53.188 20:36:11 -- common/autotest_common.sh@950 -- # wait 3504467 00:18:53.446 20:36:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:53.446 20:36:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:53.446 20:36:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:53.446 20:36:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:53.446 20:36:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:53.446 20:36:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.446 20:36:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:53.446 20:36:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.980 20:36:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:55.980 00:18:55.980 real 0m9.481s 00:18:55.980 user 0m8.708s 00:18:55.980 sys 0m4.379s 00:18:55.980 20:36:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:55.980 20:36:13 -- common/autotest_common.sh@10 -- # set +x 00:18:55.980 ************************************ 00:18:55.980 END TEST nvmf_multitarget 00:18:55.980 ************************************ 00:18:55.980 20:36:13 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:18:55.980 20:36:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:55.980 20:36:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:55.980 20:36:13 -- common/autotest_common.sh@10 -- # set +x 00:18:55.980 ************************************ 00:18:55.980 START TEST nvmf_rpc 00:18:55.980 ************************************ 00:18:55.980 20:36:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:18:55.980 * Looking for test storage... 
00:18:55.980 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:18:55.980 20:36:13 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:18:55.980 20:36:13 -- nvmf/common.sh@7 -- # uname -s 00:18:55.980 20:36:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:55.980 20:36:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:55.980 20:36:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:55.980 20:36:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:55.980 20:36:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:55.980 20:36:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:55.980 20:36:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:55.980 20:36:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:55.980 20:36:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:55.980 20:36:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:55.980 20:36:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:18:55.980 20:36:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:18:55.980 20:36:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:55.980 20:36:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:55.980 20:36:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:55.981 20:36:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:18:55.981 20:36:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.981 20:36:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.981 20:36:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.981 20:36:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.981 20:36:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.981 20:36:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.981 20:36:13 -- paths/export.sh@5 -- # export PATH 00:18:55.981 20:36:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.981 20:36:13 -- nvmf/common.sh@46 -- # : 0 00:18:55.981 20:36:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:55.981 20:36:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:55.981 20:36:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:55.981 20:36:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:55.981 20:36:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:55.981 20:36:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:55.981 20:36:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:55.981 20:36:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:55.981 20:36:13 -- target/rpc.sh@11 -- # loops=5 00:18:55.981 20:36:13 -- target/rpc.sh@23 -- # nvmftestinit 00:18:55.981 20:36:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:55.981 20:36:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:55.981 20:36:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:55.981 20:36:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:55.981 20:36:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:55.981 20:36:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.981 20:36:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:55.981 20:36:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.981 20:36:13 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:18:55.981 20:36:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:55.981 20:36:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:55.981 20:36:13 -- common/autotest_common.sh@10 -- # set +x 00:19:01.257 20:36:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:01.257 20:36:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:01.257 20:36:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:01.257 20:36:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:01.257 20:36:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:01.257 20:36:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:01.257 20:36:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:01.257 20:36:19 -- nvmf/common.sh@294 -- # net_devs=() 00:19:01.257 20:36:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:01.257 20:36:19 -- nvmf/common.sh@295 -- # e810=() 00:19:01.257 20:36:19 -- nvmf/common.sh@295 -- # local -ga e810 
00:19:01.257 20:36:19 -- nvmf/common.sh@296 -- # x722=() 00:19:01.257 20:36:19 -- nvmf/common.sh@296 -- # local -ga x722 00:19:01.257 20:36:19 -- nvmf/common.sh@297 -- # mlx=() 00:19:01.257 20:36:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:01.257 20:36:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:01.257 20:36:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:01.257 20:36:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:01.257 20:36:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:01.257 20:36:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:01.257 20:36:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:01.257 20:36:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:01.257 20:36:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:01.257 20:36:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:01.257 20:36:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:01.257 20:36:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:01.257 20:36:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:01.257 20:36:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:01.257 20:36:19 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:19:01.257 20:36:19 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:19:01.257 20:36:19 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:19:01.257 20:36:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:01.257 20:36:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:01.257 20:36:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:19:01.257 Found 0000:27:00.0 (0x8086 - 0x159b) 00:19:01.257 20:36:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:01.257 20:36:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:01.257 20:36:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.257 20:36:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.257 20:36:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:01.257 20:36:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:01.257 20:36:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:19:01.257 Found 0000:27:00.1 (0x8086 - 0x159b) 00:19:01.257 20:36:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:01.257 20:36:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:01.257 20:36:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.257 20:36:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.257 20:36:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:01.257 20:36:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:01.257 20:36:19 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:19:01.257 20:36:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:01.257 20:36:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.257 20:36:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:01.257 20:36:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.257 20:36:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:19:01.257 Found net devices under 0000:27:00.0: cvl_0_0 00:19:01.257 20:36:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.257 20:36:19 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:01.257 20:36:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.257 20:36:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:01.257 20:36:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.257 20:36:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:19:01.257 Found net devices under 0000:27:00.1: cvl_0_1 00:19:01.257 20:36:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.257 20:36:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:01.257 20:36:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:01.257 20:36:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:01.257 20:36:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:01.257 20:36:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:01.257 20:36:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:01.257 20:36:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:01.257 20:36:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:01.257 20:36:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:01.257 20:36:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:01.257 20:36:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:01.257 20:36:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:01.257 20:36:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:01.257 20:36:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:01.258 20:36:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:01.258 20:36:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:01.258 20:36:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:01.258 20:36:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:01.258 20:36:19 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:01.258 20:36:19 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:01.258 20:36:19 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:01.258 20:36:19 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:01.258 20:36:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:01.258 20:36:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:01.258 20:36:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:01.258 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:01.258 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.494 ms 00:19:01.258 00:19:01.258 --- 10.0.0.2 ping statistics --- 00:19:01.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.258 rtt min/avg/max/mdev = 0.494/0.494/0.494/0.000 ms 00:19:01.258 20:36:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:01.258 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:01.258 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:19:01.258 00:19:01.258 --- 10.0.0.1 ping statistics --- 00:19:01.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.258 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:19:01.258 20:36:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:01.258 20:36:19 -- nvmf/common.sh@410 -- # return 0 00:19:01.258 20:36:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:01.258 20:36:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:01.258 20:36:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:01.258 20:36:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:01.258 20:36:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:01.258 20:36:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:01.258 20:36:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:01.258 20:36:19 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:19:01.258 20:36:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:01.258 20:36:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:01.258 20:36:19 -- common/autotest_common.sh@10 -- # set +x 00:19:01.258 20:36:19 -- nvmf/common.sh@469 -- # nvmfpid=3508700 00:19:01.258 20:36:19 -- nvmf/common.sh@470 -- # waitforlisten 3508700 00:19:01.258 20:36:19 -- common/autotest_common.sh@819 -- # '[' -z 3508700 ']' 00:19:01.258 20:36:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.258 20:36:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:01.258 20:36:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.258 20:36:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:01.258 20:36:19 -- common/autotest_common.sh@10 -- # set +x 00:19:01.258 20:36:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:01.258 [2024-04-26 20:36:19.471668] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:01.258 [2024-04-26 20:36:19.471750] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.258 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.258 [2024-04-26 20:36:19.570165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:01.517 [2024-04-26 20:36:19.667685] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:01.518 [2024-04-26 20:36:19.667858] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.518 [2024-04-26 20:36:19.667870] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.518 [2024-04-26 20:36:19.667879] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
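[Editorial aside] Condensing the nvmf_tcp_init sequence above: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target, while its sibling (cvl_0_1) stays in the root namespace as the initiator, so a single host exercises a real NIC-to-NIC TCP path. The commands and addresses below follow the log; only the comments are added:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port -> namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                              # root ns -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target ns -> initiator check
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The -m 0xF core mask is why exactly four reactor threads report in on cores 0-3 in the startup notices that follow.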
00:19:01.518 [2024-04-26 20:36:19.667949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.518 [2024-04-26 20:36:19.668056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.518 [2024-04-26 20:36:19.668152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.518 [2024-04-26 20:36:19.668162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:02.086 20:36:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:02.086 20:36:20 -- common/autotest_common.sh@852 -- # return 0 00:19:02.086 20:36:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:02.086 20:36:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:02.086 20:36:20 -- common/autotest_common.sh@10 -- # set +x 00:19:02.086 20:36:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.086 20:36:20 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:19:02.086 20:36:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:02.086 20:36:20 -- common/autotest_common.sh@10 -- # set +x 00:19:02.086 20:36:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:02.086 20:36:20 -- target/rpc.sh@26 -- # stats='{ 00:19:02.086 "tick_rate": 1900000000, 00:19:02.086 "poll_groups": [ 00:19:02.086 { 00:19:02.086 "name": "nvmf_tgt_poll_group_0", 00:19:02.086 "admin_qpairs": 0, 00:19:02.086 "io_qpairs": 0, 00:19:02.086 "current_admin_qpairs": 0, 00:19:02.086 "current_io_qpairs": 0, 00:19:02.086 "pending_bdev_io": 0, 00:19:02.086 "completed_nvme_io": 0, 00:19:02.086 "transports": [] 00:19:02.086 }, 00:19:02.086 { 00:19:02.086 "name": "nvmf_tgt_poll_group_1", 00:19:02.086 "admin_qpairs": 0, 00:19:02.086 "io_qpairs": 0, 00:19:02.086 "current_admin_qpairs": 0, 00:19:02.086 "current_io_qpairs": 0, 00:19:02.086 "pending_bdev_io": 0, 00:19:02.086 "completed_nvme_io": 0, 00:19:02.086 "transports": [] 00:19:02.086 }, 00:19:02.086 { 00:19:02.086 "name": "nvmf_tgt_poll_group_2", 00:19:02.086 "admin_qpairs": 0, 00:19:02.086 "io_qpairs": 0, 00:19:02.086 "current_admin_qpairs": 0, 00:19:02.086 "current_io_qpairs": 0, 00:19:02.086 "pending_bdev_io": 0, 00:19:02.086 "completed_nvme_io": 0, 00:19:02.086 "transports": [] 00:19:02.086 }, 00:19:02.086 { 00:19:02.086 "name": "nvmf_tgt_poll_group_3", 00:19:02.086 "admin_qpairs": 0, 00:19:02.086 "io_qpairs": 0, 00:19:02.086 "current_admin_qpairs": 0, 00:19:02.086 "current_io_qpairs": 0, 00:19:02.086 "pending_bdev_io": 0, 00:19:02.086 "completed_nvme_io": 0, 00:19:02.086 "transports": [] 00:19:02.086 } 00:19:02.086 ] 00:19:02.086 }' 00:19:02.086 20:36:20 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:19:02.086 20:36:20 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:19:02.086 20:36:20 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:19:02.086 20:36:20 -- target/rpc.sh@15 -- # wc -l 00:19:02.086 20:36:20 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:19:02.086 20:36:20 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:19:02.086 20:36:20 -- target/rpc.sh@29 -- # [[ null == null ]] 00:19:02.086 20:36:20 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:02.086 20:36:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:02.086 20:36:20 -- common/autotest_common.sh@10 -- # set +x 00:19:02.086 [2024-04-26 20:36:20.334150] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:02.086 20:36:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:02.086 20:36:20 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:19:02.086 20:36:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:02.086 20:36:20 -- common/autotest_common.sh@10 -- # set +x 00:19:02.086 20:36:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:02.086 20:36:20 -- target/rpc.sh@33 -- # stats='{ 00:19:02.086 "tick_rate": 1900000000, 00:19:02.086 "poll_groups": [ 00:19:02.086 { 00:19:02.086 "name": "nvmf_tgt_poll_group_0", 00:19:02.086 "admin_qpairs": 0, 00:19:02.086 "io_qpairs": 0, 00:19:02.086 "current_admin_qpairs": 0, 00:19:02.086 "current_io_qpairs": 0, 00:19:02.086 "pending_bdev_io": 0, 00:19:02.086 "completed_nvme_io": 0, 00:19:02.086 "transports": [ 00:19:02.086 { 00:19:02.086 "trtype": "TCP" 00:19:02.086 } 00:19:02.086 ] 00:19:02.086 }, 00:19:02.086 { 00:19:02.086 "name": "nvmf_tgt_poll_group_1", 00:19:02.086 "admin_qpairs": 0, 00:19:02.086 "io_qpairs": 0, 00:19:02.086 "current_admin_qpairs": 0, 00:19:02.086 "current_io_qpairs": 0, 00:19:02.086 "pending_bdev_io": 0, 00:19:02.086 "completed_nvme_io": 0, 00:19:02.086 "transports": [ 00:19:02.086 { 00:19:02.086 "trtype": "TCP" 00:19:02.086 } 00:19:02.086 ] 00:19:02.086 }, 00:19:02.086 { 00:19:02.086 "name": "nvmf_tgt_poll_group_2", 00:19:02.086 "admin_qpairs": 0, 00:19:02.086 "io_qpairs": 0, 00:19:02.086 "current_admin_qpairs": 0, 00:19:02.086 "current_io_qpairs": 0, 00:19:02.086 "pending_bdev_io": 0, 00:19:02.086 "completed_nvme_io": 0, 00:19:02.086 "transports": [ 00:19:02.086 { 00:19:02.086 "trtype": "TCP" 00:19:02.086 } 00:19:02.086 ] 00:19:02.086 }, 00:19:02.086 { 00:19:02.086 "name": "nvmf_tgt_poll_group_3", 00:19:02.086 "admin_qpairs": 0, 00:19:02.086 "io_qpairs": 0, 00:19:02.086 "current_admin_qpairs": 0, 00:19:02.086 "current_io_qpairs": 0, 00:19:02.086 "pending_bdev_io": 0, 00:19:02.087 "completed_nvme_io": 0, 00:19:02.087 "transports": [ 00:19:02.087 { 00:19:02.087 "trtype": "TCP" 00:19:02.087 } 00:19:02.087 ] 00:19:02.087 } 00:19:02.087 ] 00:19:02.087 }' 00:19:02.087 20:36:20 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:19:02.087 20:36:20 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:19:02.087 20:36:20 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:19:02.087 20:36:20 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:02.087 20:36:20 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:19:02.087 20:36:20 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:19:02.087 20:36:20 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:19:02.087 20:36:20 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:19:02.087 20:36:20 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:02.346 20:36:20 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:19:02.346 20:36:20 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:19:02.346 20:36:20 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:19:02.346 20:36:20 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:19:02.346 20:36:20 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:02.346 20:36:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:02.346 20:36:20 -- common/autotest_common.sh@10 -- # set +x 00:19:02.346 Malloc1 00:19:02.346 20:36:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:02.346 20:36:20 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:02.346 20:36:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:02.346 20:36:20 -- common/autotest_common.sh@10 -- # set +x 00:19:02.346 
20:36:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:02.346 20:36:20 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:02.346 20:36:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:02.346 20:36:20 -- common/autotest_common.sh@10 -- # set +x 00:19:02.346 20:36:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:02.346 20:36:20 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:19:02.346 20:36:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:02.346 20:36:20 -- common/autotest_common.sh@10 -- # set +x 00:19:02.346 20:36:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:02.346 20:36:20 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:02.346 20:36:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:02.346 20:36:20 -- common/autotest_common.sh@10 -- # set +x 00:19:02.346 [2024-04-26 20:36:20.498824] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:02.346 20:36:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:02.346 20:36:20 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -a 10.0.0.2 -s 4420 00:19:02.346 20:36:20 -- common/autotest_common.sh@640 -- # local es=0 00:19:02.346 20:36:20 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -a 10.0.0.2 -s 4420 00:19:02.346 20:36:20 -- common/autotest_common.sh@628 -- # local arg=nvme 00:19:02.346 20:36:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:02.346 20:36:20 -- common/autotest_common.sh@632 -- # type -t nvme 00:19:02.346 20:36:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:02.346 20:36:20 -- common/autotest_common.sh@634 -- # type -P nvme 00:19:02.346 20:36:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:02.346 20:36:20 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:19:02.346 20:36:20 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:19:02.346 20:36:20 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -a 10.0.0.2 -s 4420 00:19:02.346 [2024-04-26 20:36:20.527570] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda' 00:19:02.346 Failed to write to /dev/nvme-fabrics: Input/output error 00:19:02.346 could not add new controller: failed to write to nvme-fabrics device 00:19:02.346 20:36:20 -- common/autotest_common.sh@643 -- # es=1 00:19:02.346 20:36:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:02.346 20:36:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:02.346 20:36:20 -- common/autotest_common.sh@667 -- # 
(( !es == 0 )) 00:19:02.346 20:36:20 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:19:02.346 20:36:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:02.346 20:36:20 -- common/autotest_common.sh@10 -- # set +x 00:19:02.346 20:36:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:02.346 20:36:20 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:03.726 20:36:21 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:19:03.726 20:36:21 -- common/autotest_common.sh@1177 -- # local i=0 00:19:03.726 20:36:21 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:03.726 20:36:21 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:03.726 20:36:21 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:05.637 20:36:23 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:05.637 20:36:23 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:05.637 20:36:23 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:05.637 20:36:23 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:05.637 20:36:23 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:05.637 20:36:23 -- common/autotest_common.sh@1187 -- # return 0 00:19:05.637 20:36:23 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:05.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:05.895 20:36:24 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:05.895 20:36:24 -- common/autotest_common.sh@1198 -- # local i=0 00:19:05.895 20:36:24 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:05.895 20:36:24 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:05.895 20:36:24 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:05.895 20:36:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:05.895 20:36:24 -- common/autotest_common.sh@1210 -- # return 0 00:19:05.895 20:36:24 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:19:05.895 20:36:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:05.895 20:36:24 -- common/autotest_common.sh@10 -- # set +x 00:19:05.895 20:36:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:05.895 20:36:24 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:05.896 20:36:24 -- common/autotest_common.sh@640 -- # local es=0 00:19:05.896 20:36:24 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:05.896 20:36:24 -- common/autotest_common.sh@628 -- # local arg=nvme 00:19:05.896 20:36:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:05.896 20:36:24 -- common/autotest_common.sh@632 -- # type -t nvme 00:19:05.896 20:36:24 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:05.896 20:36:24 -- common/autotest_common.sh@634 -- # type -P nvme 00:19:05.896 20:36:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:05.896 20:36:24 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:19:05.896 20:36:24 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:19:05.896 20:36:24 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:05.896 [2024-04-26 20:36:24.193238] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda' 00:19:05.896 Failed to write to /dev/nvme-fabrics: Input/output error 00:19:05.896 could not add new controller: failed to write to nvme-fabrics device 00:19:05.896 20:36:24 -- common/autotest_common.sh@643 -- # es=1 00:19:05.896 20:36:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:05.896 20:36:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:05.896 20:36:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:05.896 20:36:24 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:19:05.896 20:36:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:05.896 20:36:24 -- common/autotest_common.sh@10 -- # set +x 00:19:05.896 20:36:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:05.896 20:36:24 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:07.277 20:36:25 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:19:07.277 20:36:25 -- common/autotest_common.sh@1177 -- # local i=0 00:19:07.277 20:36:25 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:07.277 20:36:25 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:07.277 20:36:25 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:09.269 20:36:27 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:09.269 20:36:27 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:09.269 20:36:27 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:09.269 20:36:27 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:09.269 20:36:27 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:09.269 20:36:27 -- common/autotest_common.sh@1187 -- # return 0 00:19:09.269 20:36:27 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:09.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:09.526 20:36:27 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:09.526 20:36:27 -- common/autotest_common.sh@1198 -- # local i=0 00:19:09.526 20:36:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:09.526 20:36:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:09.526 20:36:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:09.526 20:36:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:09.526 20:36:27 -- common/autotest_common.sh@1210 -- # return 0 00:19:09.526 20:36:27 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:09.526 20:36:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:09.526 20:36:27 -- common/autotest_common.sh@10 -- # set +x 00:19:09.526 20:36:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:09.526 20:36:27 -- target/rpc.sh@81 -- # seq 1 5 00:19:09.526 20:36:27 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:09.526 20:36:27 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:09.526 20:36:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:09.526 20:36:27 -- common/autotest_common.sh@10 -- # set +x 00:19:09.526 20:36:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:09.526 20:36:27 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:09.526 20:36:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:09.526 20:36:27 -- common/autotest_common.sh@10 -- # set +x 00:19:09.526 [2024-04-26 20:36:27.804429] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:09.526 20:36:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:09.526 20:36:27 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:09.526 20:36:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:09.526 20:36:27 -- common/autotest_common.sh@10 -- # set +x 00:19:09.526 20:36:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:09.526 20:36:27 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:09.526 20:36:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:09.526 20:36:27 -- common/autotest_common.sh@10 -- # set +x 00:19:09.526 20:36:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:09.526 20:36:27 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:10.905 20:36:29 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:10.905 20:36:29 -- common/autotest_common.sh@1177 -- # local i=0 00:19:10.905 20:36:29 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:10.905 20:36:29 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:10.905 20:36:29 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:13.441 20:36:31 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:13.441 20:36:31 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:13.441 20:36:31 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:13.441 20:36:31 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:13.441 20:36:31 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:13.441 20:36:31 -- common/autotest_common.sh@1187 -- # return 0 00:19:13.441 20:36:31 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:13.441 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:13.441 20:36:31 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:13.441 20:36:31 -- common/autotest_common.sh@1198 -- # local i=0 00:19:13.441 20:36:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:13.441 20:36:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 
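[Editorial aside] The connect/disconnect cycle running through here leans on two polling helpers from autotest_common.sh. A sketch reconstructed from the xtrace — the 15-try / 2-second bounds come from the counters visible in the log, but treat the exact function bodies as an approximation, not SPDK's literal code:

waitforserial() {              # wait until a namespace with this serial appears
  local serial=$1 i=0
  while (( i++ <= 15 )); do
    sleep 2
    (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
  done
  return 1
}
waitforserial_disconnect() {   # wait until the serial is gone again
  local serial=$1 i=0
  while (( i++ <= 15 )); do
    lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
    sleep 2
  done
  return 1
}
waitforserial SPDKISFASTANDAWESOME   # serial set via nvmf_create_subsystem -s above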
00:19:13.441 20:36:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:13.441 20:36:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:13.441 20:36:31 -- common/autotest_common.sh@1210 -- # return 0 00:19:13.441 20:36:31 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:13.441 20:36:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.441 20:36:31 -- common/autotest_common.sh@10 -- # set +x 00:19:13.441 20:36:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.441 20:36:31 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:13.441 20:36:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.441 20:36:31 -- common/autotest_common.sh@10 -- # set +x 00:19:13.441 20:36:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.441 20:36:31 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:13.441 20:36:31 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:13.441 20:36:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.441 20:36:31 -- common/autotest_common.sh@10 -- # set +x 00:19:13.441 20:36:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.441 20:36:31 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:13.441 20:36:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.441 20:36:31 -- common/autotest_common.sh@10 -- # set +x 00:19:13.441 [2024-04-26 20:36:31.478561] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:13.441 20:36:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.441 20:36:31 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:13.441 20:36:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.441 20:36:31 -- common/autotest_common.sh@10 -- # set +x 00:19:13.441 20:36:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.441 20:36:31 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:13.441 20:36:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.441 20:36:31 -- common/autotest_common.sh@10 -- # set +x 00:19:13.441 20:36:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.441 20:36:31 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:14.819 20:36:32 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:14.819 20:36:32 -- common/autotest_common.sh@1177 -- # local i=0 00:19:14.819 20:36:32 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:14.819 20:36:32 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:14.819 20:36:32 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:16.724 20:36:34 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:16.724 20:36:34 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:16.724 20:36:34 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:16.724 20:36:34 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:16.724 20:36:34 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:16.724 20:36:34 -- 
common/autotest_common.sh@1187 -- # return 0 00:19:16.724 20:36:34 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:16.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:16.984 20:36:35 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:16.984 20:36:35 -- common/autotest_common.sh@1198 -- # local i=0 00:19:16.984 20:36:35 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:16.984 20:36:35 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:16.984 20:36:35 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:16.984 20:36:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:16.984 20:36:35 -- common/autotest_common.sh@1210 -- # return 0 00:19:16.984 20:36:35 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:16.984 20:36:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:16.984 20:36:35 -- common/autotest_common.sh@10 -- # set +x 00:19:16.984 20:36:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:16.984 20:36:35 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:16.984 20:36:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:16.984 20:36:35 -- common/autotest_common.sh@10 -- # set +x 00:19:16.984 20:36:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:16.984 20:36:35 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:16.984 20:36:35 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:16.984 20:36:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:16.984 20:36:35 -- common/autotest_common.sh@10 -- # set +x 00:19:16.984 20:36:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:16.984 20:36:35 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:16.984 20:36:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:16.984 20:36:35 -- common/autotest_common.sh@10 -- # set +x 00:19:16.984 [2024-04-26 20:36:35.192981] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:16.984 20:36:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:16.984 20:36:35 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:16.984 20:36:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:16.984 20:36:35 -- common/autotest_common.sh@10 -- # set +x 00:19:16.984 20:36:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:16.984 20:36:35 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:16.984 20:36:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:16.984 20:36:35 -- common/autotest_common.sh@10 -- # set +x 00:19:16.984 20:36:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:16.984 20:36:35 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:18.359 20:36:36 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:18.359 20:36:36 -- common/autotest_common.sh@1177 -- # local i=0 00:19:18.359 20:36:36 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:18.359 20:36:36 -- common/autotest_common.sh@1179 -- 
# [[ -n '' ]] 00:19:18.359 20:36:36 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:20.900 20:36:38 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:20.900 20:36:38 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:20.900 20:36:38 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:20.900 20:36:38 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:20.900 20:36:38 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:20.900 20:36:38 -- common/autotest_common.sh@1187 -- # return 0 00:19:20.900 20:36:38 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:20.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:20.900 20:36:38 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:20.900 20:36:38 -- common/autotest_common.sh@1198 -- # local i=0 00:19:20.900 20:36:38 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:20.900 20:36:38 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:20.900 20:36:38 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:20.900 20:36:38 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:20.900 20:36:38 -- common/autotest_common.sh@1210 -- # return 0 00:19:20.900 20:36:38 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:20.900 20:36:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:20.900 20:36:38 -- common/autotest_common.sh@10 -- # set +x 00:19:20.900 20:36:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:20.900 20:36:38 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:20.900 20:36:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:20.900 20:36:38 -- common/autotest_common.sh@10 -- # set +x 00:19:20.900 20:36:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:20.900 20:36:38 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:20.900 20:36:38 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:20.900 20:36:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:20.900 20:36:38 -- common/autotest_common.sh@10 -- # set +x 00:19:20.900 20:36:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:20.900 20:36:38 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:20.900 20:36:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:20.900 20:36:38 -- common/autotest_common.sh@10 -- # set +x 00:19:20.900 [2024-04-26 20:36:38.863311] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:20.900 20:36:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:20.900 20:36:38 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:20.900 20:36:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:20.900 20:36:38 -- common/autotest_common.sh@10 -- # set +x 00:19:20.900 20:36:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:20.900 20:36:38 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:20.900 20:36:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:20.900 20:36:38 -- common/autotest_common.sh@10 -- # set +x 00:19:20.900 20:36:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:20.900 
20:36:38 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:22.278 20:36:40 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:22.278 20:36:40 -- common/autotest_common.sh@1177 -- # local i=0 00:19:22.278 20:36:40 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:22.278 20:36:40 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:22.278 20:36:40 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:24.182 20:36:42 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:24.182 20:36:42 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:24.182 20:36:42 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:24.182 20:36:42 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:24.182 20:36:42 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:24.182 20:36:42 -- common/autotest_common.sh@1187 -- # return 0 00:19:24.182 20:36:42 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:24.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:24.182 20:36:42 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:24.182 20:36:42 -- common/autotest_common.sh@1198 -- # local i=0 00:19:24.182 20:36:42 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:24.182 20:36:42 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:24.182 20:36:42 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:24.182 20:36:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:24.182 20:36:42 -- common/autotest_common.sh@1210 -- # return 0 00:19:24.182 20:36:42 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:24.182 20:36:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:24.182 20:36:42 -- common/autotest_common.sh@10 -- # set +x 00:19:24.182 20:36:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:24.182 20:36:42 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:24.182 20:36:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:24.182 20:36:42 -- common/autotest_common.sh@10 -- # set +x 00:19:24.182 20:36:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:24.182 20:36:42 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:24.182 20:36:42 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:24.182 20:36:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:24.442 20:36:42 -- common/autotest_common.sh@10 -- # set +x 00:19:24.442 20:36:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:24.442 20:36:42 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:24.442 20:36:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:24.442 20:36:42 -- common/autotest_common.sh@10 -- # set +x 00:19:24.442 [2024-04-26 20:36:42.534298] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:24.442 20:36:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:24.442 20:36:42 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:24.442 
20:36:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:24.442 20:36:42 -- common/autotest_common.sh@10 -- # set +x 00:19:24.442 20:36:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:24.442 20:36:42 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:24.442 20:36:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:24.442 20:36:42 -- common/autotest_common.sh@10 -- # set +x 00:19:24.442 20:36:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:24.442 20:36:42 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:25.820 20:36:44 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:25.820 20:36:44 -- common/autotest_common.sh@1177 -- # local i=0 00:19:25.820 20:36:44 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:25.820 20:36:44 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:25.820 20:36:44 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:27.725 20:36:46 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:27.725 20:36:46 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:27.725 20:36:46 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:27.725 20:36:46 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:27.725 20:36:46 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:27.725 20:36:46 -- common/autotest_common.sh@1187 -- # return 0 00:19:27.725 20:36:46 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:27.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:27.983 20:36:46 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:27.983 20:36:46 -- common/autotest_common.sh@1198 -- # local i=0 00:19:27.983 20:36:46 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:27.983 20:36:46 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:27.983 20:36:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:27.983 20:36:46 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:27.983 20:36:46 -- common/autotest_common.sh@1210 -- # return 0 00:19:27.983 20:36:46 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:27.983 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:27.983 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:27.983 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:27.983 20:36:46 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:27.983 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:27.983 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:27.983 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:27.983 20:36:46 -- target/rpc.sh@99 -- # seq 1 5 00:19:27.983 20:36:46 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:27.983 20:36:46 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:27.983 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:27.983 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:27.983 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:27.983 20:36:46 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:27.983 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:27.983 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:27.983 [2024-04-26 20:36:46.281467] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:27.983 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:27.983 20:36:46 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:27.983 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:27.983 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:27.983 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:27.983 20:36:46 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:27.983 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:27.983 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:27.983 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:27.983 20:36:46 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:27.983 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:27.983 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:27.983 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:27.983 20:36:46 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:27.983 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:27.983 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:27.983 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:27.983 20:36:46 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:27.983 20:36:46 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:27.983 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:27.983 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:28.242 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.242 20:36:46 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:28.242 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.242 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:28.242 [2024-04-26 20:36:46.329435] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:28.242 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.242 20:36:46 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:28.242 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.242 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:28.242 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.242 20:36:46 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:28.242 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.242 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:28.242 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.242 20:36:46 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:28.242 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.242 20:36:46 -- 
common/autotest_common.sh@10 -- # set +x 00:19:28.242 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.242 20:36:46 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:28.242 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.242 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:28.242 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.242 20:36:46 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:28.242 20:36:46 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:28.242 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.242 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:28.242 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.242 20:36:46 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:28.242 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.242 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:28.242 [2024-04-26 20:36:46.377490] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:28.242 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.242 20:36:46 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:28.242 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.242 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:28.242 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.242 20:36:46 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:28.242 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.242 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:28.242 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.242 20:36:46 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:28.242 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.242 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:28.242 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.242 20:36:46 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:28.242 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.242 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:28.242 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.242 20:36:46 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:28.242 20:36:46 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:28.242 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.242 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:28.242 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.242 20:36:46 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:28.243 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.243 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:28.243 [2024-04-26 20:36:46.425541] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:28.243 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.243 
20:36:46 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:28.243 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.243 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:28.243 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.243 20:36:46 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:28.243 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.243 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:28.243 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.243 20:36:46 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:28.243 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.243 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:28.243 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.243 20:36:46 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:28.243 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.243 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:28.243 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.243 20:36:46 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:28.243 20:36:46 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:28.243 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.243 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:28.243 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.243 20:36:46 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:28.243 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.243 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:28.243 [2024-04-26 20:36:46.473600] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:28.243 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.243 20:36:46 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:28.243 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.243 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:28.243 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.243 20:36:46 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:28.243 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.243 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:28.243 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.243 20:36:46 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:28.243 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.243 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:28.243 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.243 20:36:46 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:28.243 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.243 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:28.243 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.243 20:36:46 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
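[Editorial aside] The final nvmf_get_stats dump below is reduced with rpc.sh's jcount/jsum helpers, whose working core is the jq | awk pipe visible in the xtrace. A sketch — feeding the captured JSON through a here-string is an assumption; the filters and the awk reducer are the ones the log runs:

jsum() {    # sum one numeric field across all poll groups
  local filter=$1
  jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}
jcount() {  # count how many values a filter yields
  local filter=$1
  jq "$filter" <<< "$stats" | wc -l
}
jsum '.poll_groups[].admin_qpairs'   # 0+1+6+0 = 7 in the dump below
jsum '.poll_groups[].io_qpairs'      # 224+223+218+224 = 889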
00:19:28.243 20:36:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.243 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:28.243 20:36:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.243 20:36:46 -- target/rpc.sh@110 -- # stats='{ 00:19:28.243 "tick_rate": 1900000000, 00:19:28.243 "poll_groups": [ 00:19:28.243 { 00:19:28.243 "name": "nvmf_tgt_poll_group_0", 00:19:28.243 "admin_qpairs": 0, 00:19:28.243 "io_qpairs": 224, 00:19:28.243 "current_admin_qpairs": 0, 00:19:28.243 "current_io_qpairs": 0, 00:19:28.243 "pending_bdev_io": 0, 00:19:28.243 "completed_nvme_io": 445, 00:19:28.243 "transports": [ 00:19:28.243 { 00:19:28.243 "trtype": "TCP" 00:19:28.243 } 00:19:28.243 ] 00:19:28.243 }, 00:19:28.243 { 00:19:28.243 "name": "nvmf_tgt_poll_group_1", 00:19:28.243 "admin_qpairs": 1, 00:19:28.243 "io_qpairs": 223, 00:19:28.243 "current_admin_qpairs": 0, 00:19:28.243 "current_io_qpairs": 0, 00:19:28.243 "pending_bdev_io": 0, 00:19:28.243 "completed_nvme_io": 297, 00:19:28.243 "transports": [ 00:19:28.243 { 00:19:28.243 "trtype": "TCP" 00:19:28.243 } 00:19:28.243 ] 00:19:28.243 }, 00:19:28.243 { 00:19:28.243 "name": "nvmf_tgt_poll_group_2", 00:19:28.243 "admin_qpairs": 6, 00:19:28.243 "io_qpairs": 218, 00:19:28.243 "current_admin_qpairs": 0, 00:19:28.243 "current_io_qpairs": 0, 00:19:28.243 "pending_bdev_io": 0, 00:19:28.243 "completed_nvme_io": 256, 00:19:28.243 "transports": [ 00:19:28.243 { 00:19:28.243 "trtype": "TCP" 00:19:28.243 } 00:19:28.243 ] 00:19:28.243 }, 00:19:28.243 { 00:19:28.243 "name": "nvmf_tgt_poll_group_3", 00:19:28.243 "admin_qpairs": 0, 00:19:28.243 "io_qpairs": 224, 00:19:28.243 "current_admin_qpairs": 0, 00:19:28.243 "current_io_qpairs": 0, 00:19:28.243 "pending_bdev_io": 0, 00:19:28.243 "completed_nvme_io": 241, 00:19:28.243 "transports": [ 00:19:28.243 { 00:19:28.243 "trtype": "TCP" 00:19:28.243 } 00:19:28.243 ] 00:19:28.243 } 00:19:28.243 ] 00:19:28.243 }' 00:19:28.243 20:36:46 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:19:28.243 20:36:46 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:19:28.243 20:36:46 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:19:28.243 20:36:46 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:28.243 20:36:46 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:19:28.243 20:36:46 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:19:28.243 20:36:46 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:19:28.243 20:36:46 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:19:28.243 20:36:46 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:28.502 20:36:46 -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:19:28.502 20:36:46 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:19:28.502 20:36:46 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:19:28.502 20:36:46 -- target/rpc.sh@123 -- # nvmftestfini 00:19:28.502 20:36:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:28.502 20:36:46 -- nvmf/common.sh@116 -- # sync 00:19:28.502 20:36:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:28.502 20:36:46 -- nvmf/common.sh@119 -- # set +e 00:19:28.502 20:36:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:28.502 20:36:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:28.502 rmmod nvme_tcp 00:19:28.502 rmmod nvme_fabrics 00:19:28.502 rmmod nvme_keyring 00:19:28.502 20:36:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:28.502 20:36:46 -- nvmf/common.sh@123 -- # set -e 00:19:28.502 20:36:46 -- 
nvmf/common.sh@124 -- # return 0 00:19:28.502 20:36:46 -- nvmf/common.sh@477 -- # '[' -n 3508700 ']' 00:19:28.502 20:36:46 -- nvmf/common.sh@478 -- # killprocess 3508700 00:19:28.502 20:36:46 -- common/autotest_common.sh@926 -- # '[' -z 3508700 ']' 00:19:28.502 20:36:46 -- common/autotest_common.sh@930 -- # kill -0 3508700 00:19:28.502 20:36:46 -- common/autotest_common.sh@931 -- # uname 00:19:28.502 20:36:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:28.502 20:36:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3508700 00:19:28.502 20:36:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:28.502 20:36:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:28.502 20:36:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3508700' 00:19:28.502 killing process with pid 3508700 00:19:28.502 20:36:46 -- common/autotest_common.sh@945 -- # kill 3508700 00:19:28.502 20:36:46 -- common/autotest_common.sh@950 -- # wait 3508700 00:19:29.070 20:36:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:29.070 20:36:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:29.070 20:36:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:29.070 20:36:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:29.070 20:36:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:29.070 20:36:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.070 20:36:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:29.070 20:36:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.980 20:36:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:30.980 00:19:30.980 real 0m35.451s 00:19:30.980 user 1m50.857s 00:19:30.980 sys 0m5.446s 00:19:30.980 20:36:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:30.980 20:36:49 -- common/autotest_common.sh@10 -- # set +x 00:19:30.980 ************************************ 00:19:30.980 END TEST nvmf_rpc 00:19:30.980 ************************************ 00:19:30.980 20:36:49 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:19:30.980 20:36:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:30.980 20:36:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:30.980 20:36:49 -- common/autotest_common.sh@10 -- # set +x 00:19:30.980 ************************************ 00:19:30.980 START TEST nvmf_invalid 00:19:30.980 ************************************ 00:19:30.980 20:36:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:19:31.240 * Looking for test storage... 
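The qpair totals verified at the end of the nvmf_rpc run above come from the jsum helper (target/rpc.sh@19-20): a jq filter applied to the nvmf_get_stats JSON, summed with awk. A minimal sketch, assuming the helper reads the $stats JSON captured at rpc.sh@110 — the trace only shows the filter, jq, and awk steps:

jsum() {
  local filter=$1
  jq "$filter" <<< "$stats" | awk '{s += $1} END {print s}'   # one number per poll group, summed
}
(( $(jsum '.poll_groups[].admin_qpairs') > 0 ))               # 0+1+6+0 = 7 in the run above
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))                  # 224+223+218+224 = 889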
00:19:31.240 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:19:31.240 20:36:49 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:19:31.240 20:36:49 -- nvmf/common.sh@7 -- # uname -s 00:19:31.240 20:36:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:31.240 20:36:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:31.240 20:36:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:31.240 20:36:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:31.240 20:36:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:31.240 20:36:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:31.240 20:36:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:31.240 20:36:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:31.240 20:36:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:31.240 20:36:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:31.240 20:36:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:19:31.241 20:36:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:19:31.241 20:36:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:31.241 20:36:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:31.241 20:36:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:19:31.241 20:36:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:19:31.241 20:36:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:31.241 20:36:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:31.241 20:36:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:31.241 20:36:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.241 20:36:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.241 20:36:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.241 20:36:49 -- paths/export.sh@5 -- # export PATH 00:19:31.241 20:36:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.241 20:36:49 -- nvmf/common.sh@46 -- # : 0 00:19:31.241 20:36:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:31.241 20:36:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:31.241 20:36:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:31.241 20:36:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:31.241 20:36:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:31.241 20:36:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:31.241 20:36:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:31.241 20:36:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:31.241 20:36:49 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:19:31.241 20:36:49 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:19:31.241 20:36:49 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:31.241 20:36:49 -- target/invalid.sh@14 -- # target=foobar 00:19:31.241 20:36:49 -- target/invalid.sh@16 -- # RANDOM=0 00:19:31.241 20:36:49 -- target/invalid.sh@34 -- # nvmftestinit 00:19:31.241 20:36:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:31.241 20:36:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:31.241 20:36:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:31.241 20:36:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:31.241 20:36:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:31.241 20:36:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.241 20:36:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:31.241 20:36:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.241 20:36:49 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:19:31.241 20:36:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:31.241 20:36:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:31.241 20:36:49 -- common/autotest_common.sh@10 -- # set +x 00:19:36.519 20:36:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:36.519 20:36:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:36.519 20:36:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:36.519 20:36:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:36.519 20:36:54 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:36.519 20:36:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:36.519 20:36:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:36.519 20:36:54 -- nvmf/common.sh@294 -- # net_devs=() 00:19:36.519 20:36:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:36.519 20:36:54 -- nvmf/common.sh@295 -- # e810=() 00:19:36.519 20:36:54 -- nvmf/common.sh@295 -- # local -ga e810 00:19:36.519 20:36:54 -- nvmf/common.sh@296 -- # x722=() 00:19:36.519 20:36:54 -- nvmf/common.sh@296 -- # local -ga x722 00:19:36.519 20:36:54 -- nvmf/common.sh@297 -- # mlx=() 00:19:36.519 20:36:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:36.519 20:36:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:36.519 20:36:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:36.519 20:36:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:36.519 20:36:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:36.519 20:36:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:36.519 20:36:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:36.519 20:36:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:36.519 20:36:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:36.519 20:36:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:36.519 20:36:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:36.519 20:36:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:36.519 20:36:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:36.519 20:36:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:36.519 20:36:54 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:19:36.519 20:36:54 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:19:36.519 20:36:54 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:19:36.519 20:36:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:36.519 20:36:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:36.519 20:36:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:19:36.519 Found 0000:27:00.0 (0x8086 - 0x159b) 00:19:36.519 20:36:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:36.519 20:36:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:36.519 20:36:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.519 20:36:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.519 20:36:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:36.519 20:36:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:36.519 20:36:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:19:36.519 Found 0000:27:00.1 (0x8086 - 0x159b) 00:19:36.519 20:36:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:36.519 20:36:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:36.519 20:36:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.519 20:36:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.519 20:36:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:36.519 20:36:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:36.519 20:36:54 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:19:36.519 20:36:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:36.519 20:36:54 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.519 20:36:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:36.519 20:36:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.519 20:36:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:19:36.519 Found net devices under 0000:27:00.0: cvl_0_0 00:19:36.519 20:36:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.520 20:36:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:36.520 20:36:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.520 20:36:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:36.520 20:36:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.520 20:36:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:19:36.520 Found net devices under 0000:27:00.1: cvl_0_1 00:19:36.520 20:36:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.520 20:36:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:36.520 20:36:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:36.520 20:36:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:36.520 20:36:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:36.520 20:36:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:36.520 20:36:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:36.520 20:36:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:36.520 20:36:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:36.520 20:36:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:36.520 20:36:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:36.520 20:36:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:36.520 20:36:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:36.520 20:36:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:36.520 20:36:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:36.520 20:36:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:36.520 20:36:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:36.520 20:36:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:36.520 20:36:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:36.520 20:36:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:36.520 20:36:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:36.520 20:36:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:36.520 20:36:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:36.520 20:36:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:36.520 20:36:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:36.520 20:36:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:36.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:36.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:19:36.520 00:19:36.520 --- 10.0.0.2 ping statistics --- 00:19:36.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.520 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:19:36.520 20:36:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:36.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:36.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms 00:19:36.520 00:19:36.520 --- 10.0.0.1 ping statistics --- 00:19:36.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.520 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:19:36.520 20:36:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:36.520 20:36:54 -- nvmf/common.sh@410 -- # return 0 00:19:36.520 20:36:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:36.520 20:36:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:36.520 20:36:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:36.520 20:36:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:36.520 20:36:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:36.520 20:36:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:36.520 20:36:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:36.520 20:36:54 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:19:36.520 20:36:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:36.520 20:36:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:36.520 20:36:54 -- common/autotest_common.sh@10 -- # set +x 00:19:36.520 20:36:54 -- nvmf/common.sh@469 -- # nvmfpid=3518206 00:19:36.520 20:36:54 -- nvmf/common.sh@470 -- # waitforlisten 3518206 00:19:36.520 20:36:54 -- common/autotest_common.sh@819 -- # '[' -z 3518206 ']' 00:19:36.520 20:36:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.520 20:36:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:36.520 20:36:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.520 20:36:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:36.520 20:36:54 -- common/autotest_common.sh@10 -- # set +x 00:19:36.520 20:36:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:36.781 [2024-04-26 20:36:54.887212] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:36.781 [2024-04-26 20:36:54.887325] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.781 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.781 [2024-04-26 20:36:55.015649] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:36.781 [2024-04-26 20:36:55.113391] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:36.781 [2024-04-26 20:36:55.113580] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.781 [2024-04-26 20:36:55.113594] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.781 [2024-04-26 20:36:55.113604] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
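Before the target app starts, nvmf_tcp_init pins one port of the NIC pair into a private network namespace so the initiator (cvl_0_1, 10.0.0.1) and the target (cvl_0_0, 10.0.0.2) talk over real TCP. Condensed from the commands traced above, with error handling omitted:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side stays in the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # open the NVMe/TCP port
ping -c 1 10.0.0.2                                               # reachability, host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # and namespace -> host

This is why nvmf_tgt is launched below via ip netns exec cvl_0_0_ns_spdk: the target binds 10.0.0.2:4420 inside the namespace while the test harness drives it from the host side.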
00:19:36.781 [2024-04-26 20:36:55.113765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.781 [2024-04-26 20:36:55.113862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.781 [2024-04-26 20:36:55.113982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.781 [2024-04-26 20:36:55.113992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:37.349 20:36:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:37.349 20:36:55 -- common/autotest_common.sh@852 -- # return 0 00:19:37.349 20:36:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:37.349 20:36:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:37.349 20:36:55 -- common/autotest_common.sh@10 -- # set +x 00:19:37.349 20:36:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.349 20:36:55 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:37.349 20:36:55 -- target/invalid.sh@40 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode28666 00:19:37.607 [2024-04-26 20:36:55.769349] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:19:37.607 20:36:55 -- target/invalid.sh@40 -- # out='request: 00:19:37.607 { 00:19:37.607 "nqn": "nqn.2016-06.io.spdk:cnode28666", 00:19:37.607 "tgt_name": "foobar", 00:19:37.607 "method": "nvmf_create_subsystem", 00:19:37.607 "req_id": 1 00:19:37.607 } 00:19:37.607 Got JSON-RPC error response 00:19:37.607 response: 00:19:37.607 { 00:19:37.607 "code": -32603, 00:19:37.607 "message": "Unable to find target foobar" 00:19:37.607 }' 00:19:37.607 20:36:55 -- target/invalid.sh@41 -- # [[ request: 00:19:37.607 { 00:19:37.607 "nqn": "nqn.2016-06.io.spdk:cnode28666", 00:19:37.607 "tgt_name": "foobar", 00:19:37.607 "method": "nvmf_create_subsystem", 00:19:37.607 "req_id": 1 00:19:37.607 } 00:19:37.607 Got JSON-RPC error response 00:19:37.607 response: 00:19:37.607 { 00:19:37.607 "code": -32603, 00:19:37.607 "message": "Unable to find target foobar" 00:19:37.607 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:19:37.607 20:36:55 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:19:37.607 20:36:55 -- target/invalid.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode19303 00:19:37.607 [2024-04-26 20:36:55.909567] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19303: invalid serial number 'SPDKISFASTANDAWESOME' 00:19:37.607 20:36:55 -- target/invalid.sh@45 -- # out='request: 00:19:37.607 { 00:19:37.607 "nqn": "nqn.2016-06.io.spdk:cnode19303", 00:19:37.607 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:19:37.607 "method": "nvmf_create_subsystem", 00:19:37.607 "req_id": 1 00:19:37.607 } 00:19:37.607 Got JSON-RPC error response 00:19:37.607 response: 00:19:37.607 { 00:19:37.607 "code": -32602, 00:19:37.607 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:19:37.607 }' 00:19:37.607 20:36:55 -- target/invalid.sh@46 -- # [[ request: 00:19:37.607 { 00:19:37.607 "nqn": "nqn.2016-06.io.spdk:cnode19303", 00:19:37.607 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:19:37.607 "method": "nvmf_create_subsystem", 00:19:37.607 "req_id": 1 00:19:37.607 } 00:19:37.607 Got JSON-RPC error response 00:19:37.607 response: 00:19:37.607 { 00:19:37.607 
"code": -32602, 00:19:37.607 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:19:37.607 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:19:37.607 20:36:55 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:19:37.607 20:36:55 -- target/invalid.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode2906 00:19:37.868 [2024-04-26 20:36:56.053742] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2906: invalid model number 'SPDK_Controller' 00:19:37.868 20:36:56 -- target/invalid.sh@50 -- # out='request: 00:19:37.868 { 00:19:37.868 "nqn": "nqn.2016-06.io.spdk:cnode2906", 00:19:37.868 "model_number": "SPDK_Controller\u001f", 00:19:37.868 "method": "nvmf_create_subsystem", 00:19:37.868 "req_id": 1 00:19:37.868 } 00:19:37.868 Got JSON-RPC error response 00:19:37.868 response: 00:19:37.868 { 00:19:37.868 "code": -32602, 00:19:37.868 "message": "Invalid MN SPDK_Controller\u001f" 00:19:37.868 }' 00:19:37.868 20:36:56 -- target/invalid.sh@51 -- # [[ request: 00:19:37.868 { 00:19:37.868 "nqn": "nqn.2016-06.io.spdk:cnode2906", 00:19:37.868 "model_number": "SPDK_Controller\u001f", 00:19:37.868 "method": "nvmf_create_subsystem", 00:19:37.868 "req_id": 1 00:19:37.869 } 00:19:37.869 Got JSON-RPC error response 00:19:37.869 response: 00:19:37.869 { 00:19:37.869 "code": -32602, 00:19:37.869 "message": "Invalid MN SPDK_Controller\u001f" 00:19:37.869 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:19:37.869 20:36:56 -- target/invalid.sh@54 -- # gen_random_s 21 00:19:37.869 20:36:56 -- target/invalid.sh@19 -- # local length=21 ll 00:19:37.869 20:36:56 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:19:37.869 20:36:56 -- target/invalid.sh@21 -- # local chars 00:19:37.869 20:36:56 -- target/invalid.sh@22 -- # local string 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # printf %x 87 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x57' 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # string+=W 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # printf %x 120 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x78' 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # string+=x 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # printf %x 121 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x79' 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # string+=y 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # printf %x 38 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # echo -e 
'\x26' 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # string+='&' 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # printf %x 126 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # string+='~' 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # printf %x 92 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # string+='\' 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # printf %x 119 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x77' 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # string+=w 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # printf %x 109 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # string+=m 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # printf %x 124 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # string+='|' 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # printf %x 34 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x22' 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # string+='"' 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # printf %x 73 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x49' 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # string+=I 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # printf %x 73 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x49' 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # string+=I 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # printf %x 116 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x74' 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # string+=t 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # printf %x 92 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # string+='\' 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # printf %x 34 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # 
echo -e '\x22' 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # string+='"' 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # printf %x 90 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # string+=Z 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # printf %x 126 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # string+='~' 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # printf %x 80 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x50' 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # string+=P 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # printf %x 40 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x28' 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # string+='(' 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # printf %x 62 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # string+='>' 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # printf %x 36 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x24' 00:19:37.869 20:36:56 -- target/invalid.sh@25 -- # string+='$' 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:37.869 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:37.869 20:36:56 -- target/invalid.sh@28 -- # [[ W == \- ]] 00:19:37.869 20:36:56 -- target/invalid.sh@31 -- # echo 'Wxy&~\wm|"IIt\"Z~P(>$' 00:19:37.869 20:36:56 -- target/invalid.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Wxy&~\wm|"IIt\"Z~P(>$' nqn.2016-06.io.spdk:cnode14534 00:19:38.132 [2024-04-26 20:36:56.294060] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14534: invalid serial number 'Wxy&~\wm|"IIt\"Z~P(>$' 00:19:38.132 20:36:56 -- target/invalid.sh@54 -- # out='request: 00:19:38.132 { 00:19:38.132 "nqn": "nqn.2016-06.io.spdk:cnode14534", 00:19:38.132 "serial_number": "Wxy&~\\wm|\"IIt\\\"Z~P(>$", 00:19:38.132 "method": "nvmf_create_subsystem", 00:19:38.132 "req_id": 1 00:19:38.132 } 00:19:38.132 Got JSON-RPC error response 00:19:38.132 response: 00:19:38.132 { 00:19:38.132 "code": -32602, 00:19:38.132 "message": "Invalid SN Wxy&~\\wm|\"IIt\\\"Z~P(>$" 00:19:38.132 }' 00:19:38.132 20:36:56 -- target/invalid.sh@55 -- # [[ request: 00:19:38.132 { 00:19:38.132 "nqn": "nqn.2016-06.io.spdk:cnode14534", 00:19:38.132 "serial_number": "Wxy&~\\wm|\"IIt\\\"Z~P(>$", 00:19:38.132 "method": "nvmf_create_subsystem", 00:19:38.132 "req_id": 1 00:19:38.132 } 00:19:38.132 Got JSON-RPC error response 00:19:38.132 response: 00:19:38.132 { 00:19:38.132 "code": -32602, 00:19:38.132 "message": 
"Invalid SN Wxy&~\\wm|\"IIt\\\"Z~P(>$" 00:19:38.132 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:19:38.132 20:36:56 -- target/invalid.sh@58 -- # gen_random_s 41 00:19:38.132 20:36:56 -- target/invalid.sh@19 -- # local length=41 ll 00:19:38.132 20:36:56 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:19:38.132 20:36:56 -- target/invalid.sh@21 -- # local chars 00:19:38.132 20:36:56 -- target/invalid.sh@22 -- # local string 00:19:38.132 20:36:56 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:19:38.132 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # printf %x 111 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # string+=o 00:19:38.132 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.132 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # printf %x 93 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # string+=']' 00:19:38.132 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.132 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # printf %x 71 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x47' 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # string+=G 00:19:38.132 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.132 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # printf %x 94 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # string+='^' 00:19:38.132 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.132 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # printf %x 39 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x27' 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # string+=\' 00:19:38.132 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.132 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # printf %x 94 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # string+='^' 00:19:38.132 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.132 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # printf %x 34 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x22' 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # string+='"' 00:19:38.132 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.132 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # printf %x 103 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x67' 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # string+=g 00:19:38.132 20:36:56 -- target/invalid.sh@24 -- # 
(( ll++ )) 00:19:38.132 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # printf %x 69 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x45' 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # string+=E 00:19:38.132 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.132 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # printf %x 67 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x43' 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # string+=C 00:19:38.132 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.132 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # printf %x 65 00:19:38.132 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x41' 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # string+=A 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # printf %x 104 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x68' 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # string+=h 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # printf %x 36 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x24' 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # string+='$' 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # printf %x 100 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x64' 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # string+=d 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # printf %x 100 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x64' 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # string+=d 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # printf %x 73 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x49' 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # string+=I 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # printf %x 106 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # string+=j 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # printf %x 84 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x54' 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # string+=T 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # printf %x 118 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x76' 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # string+=v 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( 
ll++ )) 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # printf %x 51 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x33' 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # string+=3 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # printf %x 49 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x31' 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # string+=1 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # printf %x 91 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # string+='[' 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # printf %x 45 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # string+=- 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # printf %x 58 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # string+=: 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # printf %x 100 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x64' 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # string+=d 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # printf %x 113 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x71' 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # string+=q 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # printf %x 88 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x58' 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # string+=X 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # printf %x 55 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x37' 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # string+=7 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # printf %x 83 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x53' 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # string+=S 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.133 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # printf %x 125 00:19:38.133 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # string+='}' 00:19:38.393 20:36:56 -- target/invalid.sh@24 -- # (( ll++ 
)) 00:19:38.393 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # printf %x 113 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x71' 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # string+=q 00:19:38.393 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.393 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # printf %x 49 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x31' 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # string+=1 00:19:38.393 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.393 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # printf %x 78 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # string+=N 00:19:38.393 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.393 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # printf %x 117 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x75' 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # string+=u 00:19:38.393 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.393 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # printf %x 100 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x64' 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # string+=d 00:19:38.393 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.393 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # printf %x 45 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # string+=- 00:19:38.393 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.393 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # printf %x 34 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x22' 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # string+='"' 00:19:38.393 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.393 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # printf %x 103 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x67' 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # string+=g 00:19:38.393 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.393 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # printf %x 73 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x49' 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # string+=I 00:19:38.393 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.393 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # printf %x 75 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # string+=K 00:19:38.393 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:38.393 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # printf %x 97 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # echo -e '\x61' 00:19:38.393 20:36:56 -- target/invalid.sh@25 -- # string+=a 00:19:38.393 20:36:56 -- target/invalid.sh@24 -- # (( ll++ )) 
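The character-by-character trace above (and the last few appends below) is gen_random_s from target/invalid.sh assembling a 41-character string out of printable codes 32-127; with RANDOM=0 (set at invalid.sh@16) the sequence is reproducible across runs. A condensed sketch — the chars array and loop structure match the trace, the index selection is an assumption:

gen_random_s() {
  local length=$1 ll string=
  local chars=({32..127})                            # decimal codes of the printable set
  for (( ll = 0; ll < length; ll++ )); do
    local code=${chars[RANDOM % ${#chars[@]}]}
    string+=$(echo -e "\x$(printf %x "$code")")      # hex escape -> literal character
  done
  echo "$string"
}

The [[ W == \- ]] and [[ o == \- ]] checks in the trace guard against the generated string starting with '-', which the RPC command line would otherwise parse as an option.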
00:19:38.393 20:36:56 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:38.393 20:36:56 -- target/invalid.sh@28 -- # [[ o == \- ]] 00:19:38.393 20:36:56 -- target/invalid.sh@31 -- # echo 'o]G^'\''^"gECAh$ddIjTv31[-:dqX7S}q1Nud-"gIKa' 00:19:38.393 20:36:56 -- target/invalid.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'o]G^'\''^"gECAh$ddIjTv31[-:dqX7S}q1Nud-"gIKa' nqn.2016-06.io.spdk:cnode24777 00:19:38.393 [2024-04-26 20:36:56.666534] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24777: invalid model number 'o]G^'^"gECAh$ddIjTv31[-:dqX7S}q1Nud-"gIKa' 00:19:38.393 20:36:56 -- target/invalid.sh@58 -- # out='request: 00:19:38.393 { 00:19:38.393 "nqn": "nqn.2016-06.io.spdk:cnode24777", 00:19:38.393 "model_number": "o]G^'\''^\"gECAh$ddIjTv31[-:dqX7S}q1Nud-\"gIKa", 00:19:38.393 "method": "nvmf_create_subsystem", 00:19:38.393 "req_id": 1 00:19:38.393 } 00:19:38.393 Got JSON-RPC error response 00:19:38.393 response: 00:19:38.393 { 00:19:38.394 "code": -32602, 00:19:38.394 "message": "Invalid MN o]G^'\''^\"gECAh$ddIjTv31[-:dqX7S}q1Nud-\"gIKa" 00:19:38.394 }' 00:19:38.394 20:36:56 -- target/invalid.sh@59 -- # [[ request: 00:19:38.394 { 00:19:38.394 "nqn": "nqn.2016-06.io.spdk:cnode24777", 00:19:38.394 "model_number": "o]G^'^\"gECAh$ddIjTv31[-:dqX7S}q1Nud-\"gIKa", 00:19:38.394 "method": "nvmf_create_subsystem", 00:19:38.394 "req_id": 1 00:19:38.394 } 00:19:38.394 Got JSON-RPC error response 00:19:38.394 response: 00:19:38.394 { 00:19:38.394 "code": -32602, 00:19:38.394 "message": "Invalid MN o]G^'^\"gECAh$ddIjTv31[-:dqX7S}q1Nud-\"gIKa" 00:19:38.394 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:19:38.394 20:36:56 -- target/invalid.sh@62 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:19:38.655 [2024-04-26 20:36:56.818778] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:38.655 20:36:56 -- target/invalid.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:19:38.914 20:36:56 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:19:38.914 20:36:56 -- target/invalid.sh@67 -- # echo '' 00:19:38.914 20:36:57 -- target/invalid.sh@67 -- # head -n 1 00:19:38.914 20:36:57 -- target/invalid.sh@67 -- # IP= 00:19:38.914 20:36:57 -- target/invalid.sh@69 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:19:38.914 [2024-04-26 20:36:57.139203] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:19:38.914 20:36:57 -- target/invalid.sh@69 -- # out='request: 00:19:38.914 { 00:19:38.914 "nqn": "nqn.2016-06.io.spdk:cnode", 00:19:38.914 "listen_address": { 00:19:38.914 "trtype": "tcp", 00:19:38.914 "traddr": "", 00:19:38.914 "trsvcid": "4421" 00:19:38.914 }, 00:19:38.914 "method": "nvmf_subsystem_remove_listener", 00:19:38.914 "req_id": 1 00:19:38.914 } 00:19:38.914 Got JSON-RPC error response 00:19:38.914 response: 00:19:38.914 { 00:19:38.914 "code": -32602, 00:19:38.914 "message": "Invalid parameters" 00:19:38.914 }' 00:19:38.914 20:36:57 -- target/invalid.sh@70 -- # [[ request: 00:19:38.914 { 00:19:38.914 "nqn": "nqn.2016-06.io.spdk:cnode", 00:19:38.914 "listen_address": { 00:19:38.914 "trtype": "tcp", 00:19:38.914 "traddr": "", 00:19:38.914 "trsvcid": "4421" 00:19:38.914 }, 00:19:38.914 "method": "nvmf_subsystem_remove_listener", 00:19:38.914 "req_id": 1 
00:19:38.914 } 00:19:38.914 Got JSON-RPC error response 00:19:38.914 response: 00:19:38.914 { 00:19:38.914 "code": -32602, 00:19:38.914 "message": "Invalid parameters" 00:19:38.914 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:19:38.914 20:36:57 -- target/invalid.sh@73 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2220 -i 0 00:19:39.173 [2024-04-26 20:36:57.299394] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2220: invalid cntlid range [0-65519] 00:19:39.173 20:36:57 -- target/invalid.sh@73 -- # out='request: 00:19:39.173 { 00:19:39.173 "nqn": "nqn.2016-06.io.spdk:cnode2220", 00:19:39.173 "min_cntlid": 0, 00:19:39.173 "method": "nvmf_create_subsystem", 00:19:39.173 "req_id": 1 00:19:39.173 } 00:19:39.173 Got JSON-RPC error response 00:19:39.173 response: 00:19:39.173 { 00:19:39.173 "code": -32602, 00:19:39.173 "message": "Invalid cntlid range [0-65519]" 00:19:39.173 }' 00:19:39.173 20:36:57 -- target/invalid.sh@74 -- # [[ request: 00:19:39.173 { 00:19:39.173 "nqn": "nqn.2016-06.io.spdk:cnode2220", 00:19:39.173 "min_cntlid": 0, 00:19:39.173 "method": "nvmf_create_subsystem", 00:19:39.173 "req_id": 1 00:19:39.173 } 00:19:39.173 Got JSON-RPC error response 00:19:39.173 response: 00:19:39.173 { 00:19:39.173 "code": -32602, 00:19:39.173 "message": "Invalid cntlid range [0-65519]" 00:19:39.173 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:39.173 20:36:57 -- target/invalid.sh@75 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20298 -i 65520 00:19:39.173 [2024-04-26 20:36:57.459597] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20298: invalid cntlid range [65520-65519] 00:19:39.173 20:36:57 -- target/invalid.sh@75 -- # out='request: 00:19:39.173 { 00:19:39.173 "nqn": "nqn.2016-06.io.spdk:cnode20298", 00:19:39.173 "min_cntlid": 65520, 00:19:39.173 "method": "nvmf_create_subsystem", 00:19:39.173 "req_id": 1 00:19:39.173 } 00:19:39.173 Got JSON-RPC error response 00:19:39.173 response: 00:19:39.173 { 00:19:39.173 "code": -32602, 00:19:39.173 "message": "Invalid cntlid range [65520-65519]" 00:19:39.173 }' 00:19:39.173 20:36:57 -- target/invalid.sh@76 -- # [[ request: 00:19:39.173 { 00:19:39.173 "nqn": "nqn.2016-06.io.spdk:cnode20298", 00:19:39.173 "min_cntlid": 65520, 00:19:39.173 "method": "nvmf_create_subsystem", 00:19:39.173 "req_id": 1 00:19:39.173 } 00:19:39.173 Got JSON-RPC error response 00:19:39.173 response: 00:19:39.173 { 00:19:39.173 "code": -32602, 00:19:39.173 "message": "Invalid cntlid range [65520-65519]" 00:19:39.173 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:39.173 20:36:57 -- target/invalid.sh@77 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15111 -I 0 00:19:39.431 [2024-04-26 20:36:57.599781] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15111: invalid cntlid range [1-0] 00:19:39.431 20:36:57 -- target/invalid.sh@77 -- # out='request: 00:19:39.431 { 00:19:39.431 "nqn": "nqn.2016-06.io.spdk:cnode15111", 00:19:39.432 "max_cntlid": 0, 00:19:39.432 "method": "nvmf_create_subsystem", 00:19:39.432 "req_id": 1 00:19:39.432 } 00:19:39.432 Got JSON-RPC error response 00:19:39.432 response: 00:19:39.432 { 00:19:39.432 "code": -32602, 00:19:39.432 "message": "Invalid cntlid range [1-0]" 00:19:39.432 }' 00:19:39.432 20:36:57 -- 
target/invalid.sh@78 -- # [[ request: 00:19:39.432 { 00:19:39.432 "nqn": "nqn.2016-06.io.spdk:cnode15111", 00:19:39.432 "max_cntlid": 0, 00:19:39.432 "method": "nvmf_create_subsystem", 00:19:39.432 "req_id": 1 00:19:39.432 } 00:19:39.432 Got JSON-RPC error response 00:19:39.432 response: 00:19:39.432 { 00:19:39.432 "code": -32602, 00:19:39.432 "message": "Invalid cntlid range [1-0]" 00:19:39.432 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:39.432 20:36:57 -- target/invalid.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27405 -I 65520 00:19:39.432 [2024-04-26 20:36:57.731944] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27405: invalid cntlid range [1-65520] 00:19:39.432 20:36:57 -- target/invalid.sh@79 -- # out='request: 00:19:39.432 { 00:19:39.432 "nqn": "nqn.2016-06.io.spdk:cnode27405", 00:19:39.432 "max_cntlid": 65520, 00:19:39.432 "method": "nvmf_create_subsystem", 00:19:39.432 "req_id": 1 00:19:39.432 } 00:19:39.432 Got JSON-RPC error response 00:19:39.432 response: 00:19:39.432 { 00:19:39.432 "code": -32602, 00:19:39.432 "message": "Invalid cntlid range [1-65520]" 00:19:39.432 }' 00:19:39.432 20:36:57 -- target/invalid.sh@80 -- # [[ request: 00:19:39.432 { 00:19:39.432 "nqn": "nqn.2016-06.io.spdk:cnode27405", 00:19:39.432 "max_cntlid": 65520, 00:19:39.432 "method": "nvmf_create_subsystem", 00:19:39.432 "req_id": 1 00:19:39.432 } 00:19:39.432 Got JSON-RPC error response 00:19:39.432 response: 00:19:39.432 { 00:19:39.432 "code": -32602, 00:19:39.432 "message": "Invalid cntlid range [1-65520]" 00:19:39.432 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:39.432 20:36:57 -- target/invalid.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31896 -i 6 -I 5 00:19:39.690 [2024-04-26 20:36:57.876154] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31896: invalid cntlid range [6-5] 00:19:39.690 20:36:57 -- target/invalid.sh@83 -- # out='request: 00:19:39.690 { 00:19:39.690 "nqn": "nqn.2016-06.io.spdk:cnode31896", 00:19:39.690 "min_cntlid": 6, 00:19:39.690 "max_cntlid": 5, 00:19:39.690 "method": "nvmf_create_subsystem", 00:19:39.690 "req_id": 1 00:19:39.690 } 00:19:39.690 Got JSON-RPC error response 00:19:39.690 response: 00:19:39.690 { 00:19:39.690 "code": -32602, 00:19:39.690 "message": "Invalid cntlid range [6-5]" 00:19:39.690 }' 00:19:39.690 20:36:57 -- target/invalid.sh@84 -- # [[ request: 00:19:39.690 { 00:19:39.690 "nqn": "nqn.2016-06.io.spdk:cnode31896", 00:19:39.690 "min_cntlid": 6, 00:19:39.690 "max_cntlid": 5, 00:19:39.690 "method": "nvmf_create_subsystem", 00:19:39.690 "req_id": 1 00:19:39.690 } 00:19:39.690 Got JSON-RPC error response 00:19:39.690 response: 00:19:39.690 { 00:19:39.690 "code": -32602, 00:19:39.690 "message": "Invalid cntlid range [6-5]" 00:19:39.690 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:39.690 20:36:57 -- target/invalid.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:19:39.690 20:36:57 -- target/invalid.sh@87 -- # out='request: 00:19:39.690 { 00:19:39.690 "name": "foobar", 00:19:39.690 "method": "nvmf_delete_target", 00:19:39.690 "req_id": 1 00:19:39.690 } 00:19:39.690 Got JSON-RPC error response 00:19:39.690 response: 00:19:39.690 { 00:19:39.690 "code": -32602, 00:19:39.690 "message": "The specified target doesn'\''t exist, cannot 
delete it." 00:19:39.690 }' 00:19:39.690 20:36:57 -- target/invalid.sh@88 -- # [[ request: 00:19:39.690 { 00:19:39.690 "name": "foobar", 00:19:39.690 "method": "nvmf_delete_target", 00:19:39.690 "req_id": 1 00:19:39.690 } 00:19:39.690 Got JSON-RPC error response 00:19:39.690 response: 00:19:39.690 { 00:19:39.690 "code": -32602, 00:19:39.690 "message": "The specified target doesn't exist, cannot delete it." 00:19:39.690 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:19:39.690 20:36:57 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:19:39.690 20:36:57 -- target/invalid.sh@91 -- # nvmftestfini 00:19:39.690 20:36:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:39.690 20:36:57 -- nvmf/common.sh@116 -- # sync 00:19:39.690 20:36:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:39.690 20:36:57 -- nvmf/common.sh@119 -- # set +e 00:19:39.690 20:36:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:39.690 20:36:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:39.690 rmmod nvme_tcp 00:19:39.690 rmmod nvme_fabrics 00:19:39.690 rmmod nvme_keyring 00:19:39.690 20:36:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:39.690 20:36:58 -- nvmf/common.sh@123 -- # set -e 00:19:39.690 20:36:58 -- nvmf/common.sh@124 -- # return 0 00:19:39.690 20:36:58 -- nvmf/common.sh@477 -- # '[' -n 3518206 ']' 00:19:39.690 20:36:58 -- nvmf/common.sh@478 -- # killprocess 3518206 00:19:39.690 20:36:58 -- common/autotest_common.sh@926 -- # '[' -z 3518206 ']' 00:19:39.690 20:36:58 -- common/autotest_common.sh@930 -- # kill -0 3518206 00:19:39.690 20:36:58 -- common/autotest_common.sh@931 -- # uname 00:19:39.950 20:36:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:39.950 20:36:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3518206 00:19:39.950 20:36:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:39.950 20:36:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:39.950 20:36:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3518206' 00:19:39.950 killing process with pid 3518206 00:19:39.950 20:36:58 -- common/autotest_common.sh@945 -- # kill 3518206 00:19:39.950 20:36:58 -- common/autotest_common.sh@950 -- # wait 3518206 00:19:40.209 20:36:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:40.209 20:36:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:40.209 20:36:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:40.209 20:36:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:40.209 20:36:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:40.209 20:36:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.209 20:36:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:40.209 20:36:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.747 20:37:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:42.747 00:19:42.747 real 0m11.282s 00:19:42.747 user 0m16.679s 00:19:42.747 sys 0m4.851s 00:19:42.747 20:37:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:42.747 20:37:00 -- common/autotest_common.sh@10 -- # set +x 00:19:42.747 ************************************ 00:19:42.747 END TEST nvmf_invalid 00:19:42.747 ************************************ 00:19:42.747 20:37:00 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 
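The nvmf_invalid suite that just completed drives one pattern throughout: issue an rpc.py call with a deliberately bad argument, capture the JSON-RPC error text, and glob-match it (the heavily escaped patterns such as *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* are ordinary bash [[ ]] globs with every character backslash-escaped). The error strings also pin down the accepted values: the model number passed with -d must fit the 40-byte NVMe MN field, and the controller-ID window requires min_cntlid >= 1, max_cntlid <= 65519 (0xFFEF), and min <= max. A minimal sketch of one such negative check, reusing the rpc.py entry point shown above (the cnode number here is an arbitrary placeholder):

    rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    # -i 0 is below the minimum controller ID of 1, so the target must reject it
    out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9999 -i 0 2>&1) || true
    [[ $out == *"Invalid cntlid range"* ]] && echo 'negative test passed'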
00:19:42.747 20:37:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:42.747 20:37:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:42.747 20:37:00 -- common/autotest_common.sh@10 -- # set +x 00:19:42.747 ************************************ 00:19:42.747 START TEST nvmf_abort 00:19:42.747 ************************************ 00:19:42.747 20:37:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:19:42.747 * Looking for test storage... 00:19:42.747 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:19:42.747 20:37:00 -- target/abort.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:19:42.747 20:37:00 -- nvmf/common.sh@7 -- # uname -s 00:19:42.747 20:37:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:42.747 20:37:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:42.747 20:37:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:42.747 20:37:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:42.747 20:37:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:42.747 20:37:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:42.747 20:37:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:42.747 20:37:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:42.747 20:37:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:42.747 20:37:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:42.747 20:37:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:19:42.747 20:37:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:19:42.747 20:37:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:42.747 20:37:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:42.747 20:37:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:19:42.747 20:37:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:19:42.747 20:37:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:42.747 20:37:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:42.747 20:37:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:42.747 20:37:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.748 20:37:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.748 20:37:00 -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.748 20:37:00 -- paths/export.sh@5 -- # export PATH 00:19:42.748 20:37:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.748 20:37:00 -- nvmf/common.sh@46 -- # : 0 00:19:42.748 20:37:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:42.748 20:37:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:42.748 20:37:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:42.748 20:37:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:42.748 20:37:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:42.748 20:37:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:42.748 20:37:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:42.748 20:37:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:42.748 20:37:00 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:42.748 20:37:00 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:19:42.748 20:37:00 -- target/abort.sh@14 -- # nvmftestinit 00:19:42.748 20:37:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:42.748 20:37:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:42.748 20:37:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:42.748 20:37:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:42.748 20:37:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:42.748 20:37:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.748 20:37:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:42.748 20:37:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.748 20:37:00 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:19:42.748 20:37:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:42.748 20:37:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:42.748 20:37:00 -- common/autotest_common.sh@10 -- # set +x 00:19:48.109 20:37:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:48.109 20:37:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:48.109 20:37:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:48.109 20:37:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:48.109 20:37:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:48.109 20:37:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:48.109 20:37:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:48.109 20:37:05 -- nvmf/common.sh@294 -- # net_devs=() 00:19:48.109 20:37:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:48.109 
20:37:05 -- nvmf/common.sh@295 -- # e810=() 00:19:48.109 20:37:05 -- nvmf/common.sh@295 -- # local -ga e810 00:19:48.109 20:37:05 -- nvmf/common.sh@296 -- # x722=() 00:19:48.109 20:37:05 -- nvmf/common.sh@296 -- # local -ga x722 00:19:48.109 20:37:05 -- nvmf/common.sh@297 -- # mlx=() 00:19:48.109 20:37:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:48.109 20:37:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:48.109 20:37:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:48.109 20:37:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:48.109 20:37:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:48.109 20:37:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:48.109 20:37:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:48.109 20:37:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:48.109 20:37:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:48.109 20:37:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:48.109 20:37:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:48.109 20:37:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:48.109 20:37:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:48.109 20:37:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:48.109 20:37:05 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:19:48.109 20:37:05 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:19:48.109 20:37:05 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:19:48.109 20:37:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:48.109 20:37:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:48.109 20:37:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:19:48.109 Found 0000:27:00.0 (0x8086 - 0x159b) 00:19:48.109 20:37:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:48.109 20:37:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:48.109 20:37:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:48.109 20:37:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:48.109 20:37:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:48.109 20:37:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:48.109 20:37:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:19:48.109 Found 0000:27:00.1 (0x8086 - 0x159b) 00:19:48.109 20:37:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:48.109 20:37:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:48.109 20:37:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:48.109 20:37:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:48.109 20:37:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:48.109 20:37:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:48.109 20:37:05 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:19:48.109 20:37:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:48.109 20:37:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.109 20:37:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:48.109 20:37:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.109 20:37:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:19:48.109 Found net devices under 0000:27:00.0: cvl_0_0 
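Before any target configuration, nvmf/common.sh enumerates the NICs it can test against: the e810, x722, and mlx arrays collect PCI functions by vendor:device ID (the 0x8086:0x159b matched here is the Intel E810 family driven by ice), and each matching function is mapped to its kernel netdev by listing its net/ directory in sysfs, which is what produces the "Found net devices under ..." lines. That resolution step in isolation, using one of the BDFs discovered above:

    pci=0000:27:00.0
    # a PCI network function exposes its bound kernel interface under .../net/
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"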
00:19:48.109 20:37:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.109 20:37:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:48.109 20:37:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.109 20:37:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:48.109 20:37:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.109 20:37:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:19:48.109 Found net devices under 0000:27:00.1: cvl_0_1 00:19:48.109 20:37:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.109 20:37:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:48.109 20:37:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:48.109 20:37:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:48.109 20:37:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:48.110 20:37:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:48.110 20:37:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:48.110 20:37:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:48.110 20:37:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:48.110 20:37:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:48.110 20:37:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:48.110 20:37:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:48.110 20:37:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:48.110 20:37:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:48.110 20:37:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:48.110 20:37:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:48.110 20:37:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:48.110 20:37:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:48.110 20:37:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:48.110 20:37:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:48.110 20:37:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:48.110 20:37:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:48.110 20:37:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:48.110 20:37:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:48.110 20:37:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:48.110 20:37:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:48.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:48.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:19:48.110 00:19:48.110 --- 10.0.0.2 ping statistics --- 00:19:48.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.110 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:19:48.110 20:37:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:48.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:48.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:19:48.110 00:19:48.110 --- 10.0.0.1 ping statistics --- 00:19:48.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.110 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:19:48.110 20:37:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:48.110 20:37:05 -- nvmf/common.sh@410 -- # return 0 00:19:48.110 20:37:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:48.110 20:37:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:48.110 20:37:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:48.110 20:37:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:48.110 20:37:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:48.110 20:37:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:48.110 20:37:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:48.110 20:37:05 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:19:48.110 20:37:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:48.110 20:37:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:48.110 20:37:05 -- common/autotest_common.sh@10 -- # set +x 00:19:48.110 20:37:05 -- nvmf/common.sh@469 -- # nvmfpid=3522896 00:19:48.110 20:37:05 -- nvmf/common.sh@470 -- # waitforlisten 3522896 00:19:48.110 20:37:05 -- common/autotest_common.sh@819 -- # '[' -z 3522896 ']' 00:19:48.110 20:37:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.110 20:37:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:48.110 20:37:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.110 20:37:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:48.110 20:37:05 -- common/autotest_common.sh@10 -- # set +x 00:19:48.110 20:37:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:48.110 [2024-04-26 20:37:05.780773] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:48.110 [2024-04-26 20:37:05.780846] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.110 EAL: No free 2048 kB hugepages reported on node 1 00:19:48.110 [2024-04-26 20:37:05.876601] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:48.110 [2024-04-26 20:37:05.981719] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:48.110 [2024-04-26 20:37:05.981910] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.110 [2024-04-26 20:37:05.981926] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.110 [2024-04-26 20:37:05.981936] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
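The two ping exchanges above verify the self-contained topology that nvmf_tcp_init builds from the two ports of a single NIC: cvl_0_0 is moved into the fresh namespace cvl_0_0_ns_spdk and addressed 10.0.0.2/24 (the target side), cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, and an iptables rule admits TCP traffic to the NVMe/TCP port. Reduced to the essential commands already visible in this trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and back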
00:19:48.110 [2024-04-26 20:37:05.982082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.110 [2024-04-26 20:37:05.982230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.110 [2024-04-26 20:37:05.982239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:48.367 20:37:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:48.368 20:37:06 -- common/autotest_common.sh@852 -- # return 0 00:19:48.368 20:37:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:48.368 20:37:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:48.368 20:37:06 -- common/autotest_common.sh@10 -- # set +x 00:19:48.368 20:37:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.368 20:37:06 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:19:48.368 20:37:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.368 20:37:06 -- common/autotest_common.sh@10 -- # set +x 00:19:48.368 [2024-04-26 20:37:06.537822] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:48.368 20:37:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.368 20:37:06 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:19:48.368 20:37:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.368 20:37:06 -- common/autotest_common.sh@10 -- # set +x 00:19:48.368 Malloc0 00:19:48.368 20:37:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.368 20:37:06 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:48.368 20:37:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.368 20:37:06 -- common/autotest_common.sh@10 -- # set +x 00:19:48.368 Delay0 00:19:48.368 20:37:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.368 20:37:06 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:48.368 20:37:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.368 20:37:06 -- common/autotest_common.sh@10 -- # set +x 00:19:48.368 20:37:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.368 20:37:06 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:19:48.368 20:37:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.368 20:37:06 -- common/autotest_common.sh@10 -- # set +x 00:19:48.368 20:37:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.368 20:37:06 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:48.368 20:37:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.368 20:37:06 -- common/autotest_common.sh@10 -- # set +x 00:19:48.368 [2024-04-26 20:37:06.624492] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:48.368 20:37:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.368 20:37:06 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:48.368 20:37:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.368 20:37:06 -- common/autotest_common.sh@10 -- # set +x 00:19:48.368 20:37:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.368 20:37:06 -- target/abort.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:19:48.368 EAL: No free 2048 kB hugepages reported on node 1 00:19:48.626 [2024-04-26 20:37:06.768389] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:19:51.160 Initializing NVMe Controllers 00:19:51.160 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:19:51.160 controller IO queue size 128 less than required 00:19:51.160 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:19:51.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:19:51.160 Initialization complete. Launching workers. 00:19:51.160 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 47579 00:19:51.160 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 47640, failed to submit 62 00:19:51.160 success 47579, unsuccess 61, failed 0 00:19:51.160 20:37:08 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:51.160 20:37:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:51.160 20:37:08 -- common/autotest_common.sh@10 -- # set +x 00:19:51.160 20:37:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:51.161 20:37:08 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:19:51.161 20:37:08 -- target/abort.sh@38 -- # nvmftestfini 00:19:51.161 20:37:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:51.161 20:37:08 -- nvmf/common.sh@116 -- # sync 00:19:51.161 20:37:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:51.161 20:37:08 -- nvmf/common.sh@119 -- # set +e 00:19:51.161 20:37:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:51.161 20:37:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:51.161 rmmod nvme_tcp 00:19:51.161 rmmod nvme_fabrics 00:19:51.161 rmmod nvme_keyring 00:19:51.161 20:37:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:51.161 20:37:09 -- nvmf/common.sh@123 -- # set -e 00:19:51.161 20:37:09 -- nvmf/common.sh@124 -- # return 0 00:19:51.161 20:37:09 -- nvmf/common.sh@477 -- # '[' -n 3522896 ']' 00:19:51.161 20:37:09 -- nvmf/common.sh@478 -- # killprocess 3522896 00:19:51.161 20:37:09 -- common/autotest_common.sh@926 -- # '[' -z 3522896 ']' 00:19:51.161 20:37:09 -- common/autotest_common.sh@930 -- # kill -0 3522896 00:19:51.161 20:37:09 -- common/autotest_common.sh@931 -- # uname 00:19:51.161 20:37:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:51.161 20:37:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3522896 00:19:51.161 20:37:09 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:51.161 20:37:09 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:51.161 20:37:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3522896' 00:19:51.161 killing process with pid 3522896 00:19:51.161 20:37:09 -- common/autotest_common.sh@945 -- # kill 3522896 00:19:51.161 20:37:09 -- common/autotest_common.sh@950 -- # wait 3522896 00:19:51.422 20:37:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:51.422 20:37:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:51.422 20:37:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:51.422 20:37:09 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:51.422 20:37:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:51.422 
20:37:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.422 20:37:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:51.422 20:37:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.335 20:37:11 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:53.335 00:19:53.335 real 0m10.987s 00:19:53.335 user 0m13.967s 00:19:53.335 sys 0m4.303s 00:19:53.335 20:37:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:53.335 20:37:11 -- common/autotest_common.sh@10 -- # set +x 00:19:53.335 ************************************ 00:19:53.335 END TEST nvmf_abort 00:19:53.335 ************************************ 00:19:53.336 20:37:11 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:19:53.336 20:37:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:53.336 20:37:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:53.336 20:37:11 -- common/autotest_common.sh@10 -- # set +x 00:19:53.336 ************************************ 00:19:53.336 START TEST nvmf_ns_hotplug_stress 00:19:53.336 ************************************ 00:19:53.336 20:37:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:19:53.596 * Looking for test storage... 00:19:53.596 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:19:53.596 20:37:11 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:19:53.596 20:37:11 -- nvmf/common.sh@7 -- # uname -s 00:19:53.596 20:37:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:53.596 20:37:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:53.596 20:37:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:53.596 20:37:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:53.596 20:37:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:53.596 20:37:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:53.596 20:37:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:53.596 20:37:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:53.596 20:37:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:53.596 20:37:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:53.596 20:37:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:19:53.596 20:37:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:19:53.596 20:37:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:53.596 20:37:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:53.596 20:37:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:19:53.596 20:37:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:19:53.596 20:37:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:53.596 20:37:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:53.596 20:37:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:53.596 20:37:11 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.597 20:37:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.597 20:37:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.597 20:37:11 -- paths/export.sh@5 -- # export PATH 00:19:53.597 20:37:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.597 20:37:11 -- nvmf/common.sh@46 -- # : 0 00:19:53.597 20:37:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:53.597 20:37:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:53.597 20:37:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:53.597 20:37:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:53.597 20:37:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:53.597 20:37:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:53.597 20:37:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:53.597 20:37:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:53.597 20:37:11 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:19:53.597 20:37:11 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:19:53.597 20:37:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:53.597 20:37:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:53.597 20:37:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:53.597 20:37:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:53.597 20:37:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:53.597 20:37:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:19:53.597 20:37:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:53.597 20:37:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.597 20:37:11 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:19:53.597 20:37:11 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:53.597 20:37:11 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:53.597 20:37:11 -- common/autotest_common.sh@10 -- # set +x 00:20:00.186 20:37:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:00.186 20:37:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:00.186 20:37:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:00.186 20:37:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:00.186 20:37:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:00.186 20:37:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:00.186 20:37:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:00.186 20:37:17 -- nvmf/common.sh@294 -- # net_devs=() 00:20:00.186 20:37:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:00.186 20:37:17 -- nvmf/common.sh@295 -- # e810=() 00:20:00.186 20:37:17 -- nvmf/common.sh@295 -- # local -ga e810 00:20:00.186 20:37:17 -- nvmf/common.sh@296 -- # x722=() 00:20:00.186 20:37:17 -- nvmf/common.sh@296 -- # local -ga x722 00:20:00.186 20:37:17 -- nvmf/common.sh@297 -- # mlx=() 00:20:00.186 20:37:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:00.186 20:37:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:00.186 20:37:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:00.186 20:37:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:00.186 20:37:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:00.186 20:37:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:00.186 20:37:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:00.186 20:37:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:00.186 20:37:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:00.186 20:37:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:00.186 20:37:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:00.186 20:37:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:00.186 20:37:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:00.186 20:37:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:00.186 20:37:17 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:20:00.186 20:37:17 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:20:00.186 20:37:17 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:20:00.186 20:37:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:00.186 20:37:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:00.186 20:37:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:20:00.186 Found 0000:27:00.0 (0x8086 - 0x159b) 00:20:00.186 20:37:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:00.186 20:37:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:00.186 20:37:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.186 20:37:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.186 20:37:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:00.186 20:37:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:00.186 20:37:17 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:20:00.186 Found 0000:27:00.1 (0x8086 - 0x159b) 00:20:00.186 20:37:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:00.186 20:37:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:00.186 20:37:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.186 20:37:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.186 20:37:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:00.186 20:37:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:00.186 20:37:17 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:20:00.186 20:37:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:00.186 20:37:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.186 20:37:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:00.186 20:37:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.186 20:37:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:20:00.186 Found net devices under 0000:27:00.0: cvl_0_0 00:20:00.186 20:37:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.186 20:37:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:00.186 20:37:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.186 20:37:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:00.186 20:37:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.186 20:37:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:20:00.186 Found net devices under 0000:27:00.1: cvl_0_1 00:20:00.186 20:37:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.186 20:37:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:00.186 20:37:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:00.186 20:37:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:00.186 20:37:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:00.186 20:37:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:00.186 20:37:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:00.186 20:37:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:00.186 20:37:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:00.186 20:37:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:00.186 20:37:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:00.186 20:37:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:00.186 20:37:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:00.186 20:37:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:00.186 20:37:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:00.186 20:37:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:00.186 20:37:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:00.186 20:37:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:00.186 20:37:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:00.186 20:37:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:00.186 20:37:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:00.186 20:37:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:00.186 20:37:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:00.186 20:37:18 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:20:00.186 20:37:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:00.186 20:37:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:00.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:00.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:20:00.186 00:20:00.186 --- 10.0.0.2 ping statistics --- 00:20:00.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.187 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:20:00.187 20:37:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:00.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:00.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:20:00.187 00:20:00.187 --- 10.0.0.1 ping statistics --- 00:20:00.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.187 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:20:00.187 20:37:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:00.187 20:37:18 -- nvmf/common.sh@410 -- # return 0 00:20:00.187 20:37:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:00.187 20:37:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:00.187 20:37:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:00.187 20:37:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:00.187 20:37:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:00.187 20:37:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:00.187 20:37:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:00.187 20:37:18 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:20:00.187 20:37:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:00.187 20:37:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:00.187 20:37:18 -- common/autotest_common.sh@10 -- # set +x 00:20:00.187 20:37:18 -- nvmf/common.sh@469 -- # nvmfpid=3527728 00:20:00.187 20:37:18 -- nvmf/common.sh@470 -- # waitforlisten 3527728 00:20:00.187 20:37:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:00.187 20:37:18 -- common/autotest_common.sh@819 -- # '[' -z 3527728 ']' 00:20:00.187 20:37:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.187 20:37:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:00.187 20:37:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.187 20:37:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:00.187 20:37:18 -- common/autotest_common.sh@10 -- # set +x 00:20:00.187 [2024-04-26 20:37:18.226641] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
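nvmfappstart then launches the target application inside that namespace. The arguments visible above give it shared-memory id 0 (-i 0), the full tracepoint group mask (-e 0xFFFF, which is what triggers the Tracepoint Group Mask notice), and core mask 0xE, which is why exactly three reactors come up on cores 1-3. waitforlisten then blocks until the application answers on /var/tmp/spdk.sock before any configuration RPC is sent; a rough stand-in for that helper, assuming a simple poll against a known RPC method:

    spdk=/var/jenkins/workspace/dsa-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # poll until the JSON-RPC socket accepts commands (approximates waitforlisten)
    until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done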
00:20:00.187 [2024-04-26 20:37:18.226770] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.187 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.187 [2024-04-26 20:37:18.367004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:00.187 [2024-04-26 20:37:18.470715] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:00.187 [2024-04-26 20:37:18.470934] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.187 [2024-04-26 20:37:18.470949] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.187 [2024-04-26 20:37:18.470959] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:00.187 [2024-04-26 20:37:18.471029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.187 [2024-04-26 20:37:18.474412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:00.187 [2024-04-26 20:37:18.474415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.758 20:37:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:00.758 20:37:18 -- common/autotest_common.sh@852 -- # return 0 00:20:00.758 20:37:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:00.758 20:37:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:00.758 20:37:18 -- common/autotest_common.sh@10 -- # set +x 00:20:00.758 20:37:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.758 20:37:18 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:20:00.758 20:37:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:01.020 [2024-04-26 20:37:19.114216] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.020 20:37:19 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:01.020 20:37:19 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:01.279 [2024-04-26 20:37:19.437530] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.279 20:37:19 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:01.279 20:37:19 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:20:01.539 Malloc0 00:20:01.539 20:37:19 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:20:01.539 Delay0 00:20:01.798 20:37:19 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:01.799 20:37:20 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create NULL1 1000 512
00:20:02.059 NULL1
00:20:02.059 20:37:20 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:20:02.059 20:37:20 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=3528282
00:20:02.059 20:37:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3528282
00:20:02.059 20:37:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:20:02.059 20:37:20 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:20:02.318 EAL: No free 2048 kB hugepages reported on node 1
00:20:02.318 20:37:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:20:02.318 20:37:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001
00:20:02.318 20:37:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:20:02.578 [2024-04-26 20:37:20.762911] bdev.c:4963:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1
00:20:02.578 true
00:20:02.578 20:37:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3528282
00:20:02.578 20:37:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:20:02.837 20:37:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:20:02.837 20:37:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002
00:20:02.837 20:37:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:20:03.095 true
[... the kill -0 / nvmf_subsystem_remove_ns / nvmf_subsystem_add_ns / bdev_null_resize cycle repeats unchanged for null_size 1003 through 1068, timestamps 00:20:03.095 through 00:20:32.245, while spdk_nvme_perf keeps running ...]
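The elided span above is a single tight loop. As orientation, here is a minimal bash sketch of that loop as reconstructed from the trace; the rpc.py path, NQN, and bdev name are taken from the log, while the control flow and the PERF_PID plumbing are inferred rather than copied from ns_hotplug_stress.sh:

    #!/usr/bin/env bash
    # Sketch only: reconstructed from the trace above, not copied from the test.
    rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1000

    # Hot-remove and re-add a namespace, and grow the backing null bdev,
    # for as long as the spdk_nvme_perf workload ($PERF_PID) stays alive.
    while kill -0 "$PERF_PID" 2>/dev/null; do
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1        # detach NSID 1 under I/O
        "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0      # re-attach a bdev as a new NS
        null_size=$((null_size + 1))
        "$rpc" bdev_null_resize NULL1 "$null_size"      # grow NULL1 (size in MiB)
    done
    wait "$PERF_PID"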
00:20:32.245 Initializing NVMe Controllers
00:20:32.245 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:32.245 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1
00:20:32.245 Controller IO queue size 128, less than required.
00:20:32.245 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:32.245 WARNING: Some requested NVMe devices were skipped
00:20:32.245 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:32.245 Initialization complete. Launching workers.
00:20:32.245 ========================================================
00:20:32.245                                                                              Latency(us)
00:20:32.245 Device Information                                                     :      IOPS     MiB/s   Average       min       max
00:20:32.245 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:  31059.17     15.17   4121.06   1472.86  43640.85
00:20:32.245 ========================================================
00:20:32.245 Total                                                                  :  31059.17     15.17   4121.06   1472.86  43640.85
00:20:32.245
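Two quick consistency checks on the summary table, using only numbers that appear in the log (512-byte reads at queue depth 128 come from the spdk_nvme_perf flags -o 512 -q 128):

    # 31059.17 IOPS x 512 B should reproduce the reported 15.17 MiB/s, and with
    # 128 IOs in flight Little's law predicts roughly the 4121.06 us average.
    awk 'BEGIN {
        printf "throughput  = %.2f MiB/s\n", 31059.17 * 512 / (1024 * 1024)
        printf "avg latency = %.2f us\n", 128 / 31059.17 * 1e6
    }'

Both come out at ~15.17 MiB/s and ~4121 us, consistent with the table.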
00:20:32.245 20:37:50 -- target/ns_hotplug_stress.sh@40 -- # null_size=1069
00:20:32.245 20:37:50 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1069
00:20:32.505 true
00:20:32.505 20:37:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3528282
00:20:32.505 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (3528282) - No such process
00:20:32.505 20:37:50 -- target/ns_hotplug_stress.sh@44 -- # wait 3528282
00:20:32.505 20:37:50 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:20:32.505 20:37:50 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini
00:20:32.505 20:37:50 -- nvmf/common.sh@476 -- # nvmfcleanup
00:20:32.505 20:37:50 -- nvmf/common.sh@116 -- # sync
00:20:32.505 20:37:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:20:32.505 20:37:50 -- nvmf/common.sh@119 -- # set +e
00:20:32.505 20:37:50 -- nvmf/common.sh@120 -- # for i in {1..20}
00:20:32.505 20:37:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:20:32.505 rmmod nvme_tcp
00:20:32.505 rmmod nvme_fabrics
00:20:32.505 rmmod nvme_keyring
00:20:32.762 20:37:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:20:32.762 20:37:50 -- nvmf/common.sh@123 -- # set -e
00:20:32.762 20:37:50 -- nvmf/common.sh@124 -- # return 0
00:20:32.762 20:37:50 -- nvmf/common.sh@477 -- # '[' -n 3527728 ']'
00:20:32.762 20:37:50 -- nvmf/common.sh@478 -- # killprocess 3527728
00:20:32.762 20:37:50 -- common/autotest_common.sh@926 -- # '[' -z 3527728 ']'
00:20:32.762 20:37:50 -- common/autotest_common.sh@930 -- # kill -0 3527728
00:20:32.762 20:37:50 -- common/autotest_common.sh@931 -- # uname
00:20:32.762 20:37:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:20:32.762 20:37:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3527728
00:20:32.762 20:37:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:20:32.762 20:37:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:20:32.762 20:37:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3527728'
killing process with pid 3527728
00:20:32.762 20:37:50 -- common/autotest_common.sh@945 -- # kill 3527728
00:20:32.762 20:37:50 -- common/autotest_common.sh@950 -- # wait 3527728
00:20:33.327 20:37:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:20:33.327 20:37:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:20:33.327 20:37:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:20:33.327 20:37:51 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:33.327 20:37:51 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:20:33.327 20:37:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:33.327 20:37:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:33.327 20:37:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:35.233 20:37:53 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:20:35.233
00:20:35.233 real    0m41.775s
00:20:35.233 user    2m34.082s
00:20:35.233 sys     0m12.101s
00:20:35.233 20:37:53 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:20:35.233 20:37:53 -- common/autotest_common.sh@10 -- # set +x
00:20:35.233 ************************************
00:20:35.233 END TEST nvmf_ns_hotplug_stress
00:20:35.233 ************************************
00:20:35.233 20:37:53 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:20:35.233 20:37:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:20:35.233 20:37:53 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:20:35.233 20:37:53 -- common/autotest_common.sh@10 -- # set +x
00:20:35.233 ************************************
00:20:35.233 START TEST nvmf_connect_stress
00:20:35.233 ************************************
00:20:35.233 20:37:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
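The real/user/sys block and the END TEST / START TEST banners above are printed by autotest's run_test helper around each test script. Purely as an illustration of that output shape (this is not SPDK's actual implementation), such a wrapper looks roughly like:

    # Illustrative only: times a test body and prints the banners seen in the log.
    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return "$rc"
    }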
00:20:35.233 * Looking for test storage...
00:20:35.233 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target
00:20:35.233 20:37:53 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh
00:20:35.233 20:37:53 -- nvmf/common.sh@7 -- # uname -s
00:20:35.233 20:37:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:35.233 20:37:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:35.233 20:37:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:35.233 20:37:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:35.233 20:37:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:35.233 20:37:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:35.233 20:37:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:35.233 20:37:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:35.233 20:37:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:35.233 20:37:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:35.233 20:37:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda
00:20:35.233 20:37:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda
00:20:35.233 20:37:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:35.233 20:37:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:35.233 20:37:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:20:35.233 20:37:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh
00:20:35.233 20:37:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:35.233 20:37:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:35.233 20:37:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:35.233 20:37:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go triplet repeated five more times ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:35.233 20:37:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same repeated toolchain paths and system PATH as above ...]
00:20:35.233 20:37:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same repeated toolchain paths and system PATH as above ...]
00:20:35.233 20:37:53 -- paths/export.sh@5 -- # export PATH
00:20:35.233 20:37:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... same repeated toolchain paths and system PATH as above ...]
00:20:35.233 20:37:53 -- nvmf/common.sh@46 -- # : 0
00:20:35.233 20:37:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:20:35.233 20:37:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:20:35.233 20:37:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:20:35.233 20:37:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:35.233 20:37:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:35.233 20:37:53 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:20:35.233 20:37:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:20:35.233 20:37:53 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:20:35.233 20:37:53 -- target/connect_stress.sh@12 -- # nvmftestinit
00:20:35.233 20:37:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:20:35.233 20:37:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:35.233 20:37:53 -- nvmf/common.sh@436 -- # prepare_net_devs
00:20:35.233 20:37:53 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:20:35.233 20:37:53 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:20:35.233 20:37:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:35.233 20:37:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:35.233 20:37:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:35.233 20:37:53 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]]
00:20:35.233 20:37:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:20:35.233 20:37:53 -- nvmf/common.sh@284 -- # xtrace_disable
00:20:35.233 20:37:53 -- common/autotest_common.sh@10 -- # set +x
00:20:40.507 20:37:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
00:20:40.507 20:37:58 -- nvmf/common.sh@290 -- # pci_devs=()
00:20:40.507 20:37:58 -- nvmf/common.sh@290 -- # local -a pci_devs
00:20:40.507 20:37:58 -- nvmf/common.sh@291 -- # pci_net_devs=()
00:20:40.507 20:37:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs
00:20:40.507 20:37:58 -- nvmf/common.sh@292 -- # pci_drivers=()
00:20:40.507 20:37:58 -- nvmf/common.sh@292 -- # local -A pci_drivers
00:20:40.507 20:37:58 -- nvmf/common.sh@294 -- # net_devs=()
00:20:40.507 20:37:58 -- nvmf/common.sh@294 -- # local -ga net_devs
00:20:40.507 20:37:58 -- nvmf/common.sh@295 -- # e810=()
00:20:40.507 20:37:58 -- nvmf/common.sh@295 -- # local -ga e810
00:20:40.507 20:37:58 -- nvmf/common.sh@296 -- # x722=()
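The host NQN and host ID sourced above come from nvme-cli. The call and its output format are exactly what the trace records; the UUID shown is the one this particular run generated, and a fresh invocation would yield a different one:

    $ nvme gen-hostnqn
    nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda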
00:20:40.507 20:37:58 -- nvmf/common.sh@296 -- # local -ga x722
00:20:40.507 20:37:58 -- nvmf/common.sh@297 -- # mlx=()
00:20:40.507 20:37:58 -- nvmf/common.sh@297 -- # local -ga mlx
00:20:40.507 20:37:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:20:40.507 20:37:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:20:40.507 20:37:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:20:40.507 20:37:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:20:40.507 20:37:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:20:40.507 20:37:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:20:40.507 20:37:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:20:40.507 20:37:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:20:40.507 20:37:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:20:40.507 20:37:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:20:40.507 20:37:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:20:40.507 20:37:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}")
00:20:40.507 20:37:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]]
00:20:40.507 20:37:58 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]]
00:20:40.507 20:37:58 -- nvmf/common.sh@328 -- # [[ '' == e810 ]]
00:20:40.507 20:37:58 -- nvmf/common.sh@330 -- # [[ '' == x722 ]]
00:20:40.507 20:37:58 -- nvmf/common.sh@334 -- # (( 2 == 0 ))
00:20:40.507 20:37:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:20:40.507 20:37:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)'
00:20:40.507 Found 0000:27:00.0 (0x8086 - 0x159b)
00:20:40.507 20:37:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:20:40.507 20:37:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:20:40.507 20:37:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:40.507 20:37:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:40.507 20:37:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:20:40.507 20:37:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:20:40.507 20:37:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)'
00:20:40.507 Found 0000:27:00.1 (0x8086 - 0x159b)
00:20:40.507 20:37:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:20:40.507 20:37:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:20:40.507 20:37:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:40.507 20:37:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:40.507 20:37:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:20:40.507 20:37:58 -- nvmf/common.sh@365 -- # (( 0 > 0 ))
00:20:40.507 20:37:58 -- nvmf/common.sh@371 -- # [[ '' == e810 ]]
00:20:40.507 20:37:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:20:40.507 20:37:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:40.507 20:37:58 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:20:40.507 20:37:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:40.507 20:37:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0'
00:20:40.507 Found net devices under 0000:27:00.0: cvl_0_0
00:20:40.507 20:37:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:20:40.507 20:37:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:20:40.507 20:37:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:40.507 20:37:58 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:20:40.507 20:37:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:40.507 20:37:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1'
00:20:40.507 Found net devices under 0000:27:00.1: cvl_0_1
00:20:40.507 20:37:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:20:40.507 20:37:58 -- nvmf/common.sh@392 -- # (( 2 == 0 ))
00:20:40.507 20:37:58 -- nvmf/common.sh@402 -- # is_hw=yes
00:20:40.507 20:37:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]]
00:20:40.507 20:37:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]]
00:20:40.507 20:37:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init
00:20:40.507 20:37:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1
00:20:40.507 20:37:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:20:40.507 20:37:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:20:40.507 20:37:58 -- nvmf/common.sh@233 -- # (( 2 > 1 ))
00:20:40.507 20:37:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:20:40.507 20:37:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:20:40.508 20:37:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP=
00:20:40.508 20:37:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:20:40.508 20:37:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:40.508 20:37:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:20:40.508 20:37:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:20:40.508 20:37:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:20:40.508 20:37:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:20:40.767 20:37:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:40.767 20:37:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:40.767 20:37:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:20:40.767 20:37:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:40.767 20:37:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:40.767 20:37:59 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:40.767 20:37:59 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:20:40.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:40.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms
00:20:40.767
00:20:40.767 --- 10.0.0.2 ping statistics ---
00:20:40.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:40.767 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms
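Gathered from the nvmf_tcp_init trace above, the test topology needs no virtual interfaces: one physical port (cvl_0_0) is moved into a private network namespace as the target side, while the other (cvl_0_1) stays in the root namespace as the initiator. These are exactly the commands the log shows, collected in order:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP port
    ping -c 1 10.0.0.2                                 # initiator -> target check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator check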
00:20:40.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:20:40.767 00:20:40.767 --- 10.0.0.1 ping statistics --- 00:20:40.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.767 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:20:40.767 20:37:59 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:40.767 20:37:59 -- nvmf/common.sh@410 -- # return 0 00:20:40.767 20:37:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:40.767 20:37:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:40.767 20:37:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:40.767 20:37:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:40.767 20:37:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:40.767 20:37:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:40.767 20:37:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:40.767 20:37:59 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:20:40.767 20:37:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:40.767 20:37:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:40.768 20:37:59 -- common/autotest_common.sh@10 -- # set +x 00:20:40.768 20:37:59 -- nvmf/common.sh@469 -- # nvmfpid=3538466 00:20:40.768 20:37:59 -- nvmf/common.sh@470 -- # waitforlisten 3538466 00:20:40.768 20:37:59 -- common/autotest_common.sh@819 -- # '[' -z 3538466 ']' 00:20:40.768 20:37:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.768 20:37:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:40.768 20:37:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.768 20:37:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:40.768 20:37:59 -- common/autotest_common.sh@10 -- # set +x 00:20:40.768 20:37:59 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:41.052 [2024-04-26 20:37:59.136202] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:20:41.052 [2024-04-26 20:37:59.136305] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:41.052 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.052 [2024-04-26 20:37:59.258895] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:41.052 [2024-04-26 20:37:59.356940] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:41.052 [2024-04-26 20:37:59.357118] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:41.052 [2024-04-26 20:37:59.357131] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:41.052 [2024-04-26 20:37:59.357139] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
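
The nvmf_tcp_init trace above is the standard two-port loopback topology these TCP tests run on: one NIC port (cvl_0_0) is moved into a private network namespace and becomes the target side, the sibling port (cvl_0_1) stays in the default namespace as the initiator, and a single iptables rule opens the NVMe/TCP port between them. Condensed into plain shell, with the interface and namespace names taken from this run (a simplified reconstruction of what the trace shows, not the nvmf/common.sh source):

    TARGET_IF=cvl_0_0            # port handed to the target, moved into a netns
    INITIATOR_IF=cvl_0_1         # sibling port, left in the default namespace
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator

The two single-packet pings act as a reachability gate in both directions before any NVMe/TCP traffic is attempted, and NVMF_APP is then prefixed with "ip netns exec" so the target process runs entirely inside the namespace.
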
00:20:41.052 [2024-04-26 20:37:59.357289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:41.052 [2024-04-26 20:37:59.357410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.052 [2024-04-26 20:37:59.357418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:41.640 20:37:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:41.640 20:37:59 -- common/autotest_common.sh@852 -- # return 0 00:20:41.640 20:37:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:41.640 20:37:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:41.640 20:37:59 -- common/autotest_common.sh@10 -- # set +x 00:20:41.640 20:37:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.640 20:37:59 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:41.640 20:37:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:41.640 20:37:59 -- common/autotest_common.sh@10 -- # set +x 00:20:41.640 [2024-04-26 20:37:59.868813] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.640 20:37:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:41.640 20:37:59 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:41.640 20:37:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:41.640 20:37:59 -- common/autotest_common.sh@10 -- # set +x 00:20:41.640 20:37:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:41.640 20:37:59 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:41.640 20:37:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:41.640 20:37:59 -- common/autotest_common.sh@10 -- # set +x 00:20:41.640 [2024-04-26 20:37:59.908184] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:41.640 20:37:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:41.640 20:37:59 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:20:41.640 20:37:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:41.640 20:37:59 -- common/autotest_common.sh@10 -- # set +x 00:20:41.640 NULL1 00:20:41.640 20:37:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:41.640 20:37:59 -- target/connect_stress.sh@21 -- # PERF_PID=3538781 00:20:41.640 20:37:59 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:20:41.640 20:37:59 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:20:41.640 20:37:59 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:20:41.640 20:37:59 -- target/connect_stress.sh@27 -- # seq 1 20 00:20:41.640 20:37:59 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:41.640 20:37:59 -- target/connect_stress.sh@28 -- # cat 00:20:41.640 20:37:59 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:41.640 20:37:59 -- target/connect_stress.sh@28 -- # cat 00:20:41.640 20:37:59 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:41.640 20:37:59 -- target/connect_stress.sh@28 -- # cat 00:20:41.640 20:37:59 -- target/connect_stress.sh@27 
-- # for i in $(seq 1 20) 00:20:41.640 20:37:59 -- target/connect_stress.sh@28 -- # cat 00:20:41.640 20:37:59 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:41.640 20:37:59 -- target/connect_stress.sh@28 -- # cat 00:20:41.640 20:37:59 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:41.640 20:37:59 -- target/connect_stress.sh@28 -- # cat 00:20:41.640 20:37:59 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:41.640 20:37:59 -- target/connect_stress.sh@28 -- # cat 00:20:41.640 20:37:59 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:41.640 20:37:59 -- target/connect_stress.sh@28 -- # cat 00:20:41.640 20:37:59 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:41.640 20:37:59 -- target/connect_stress.sh@28 -- # cat 00:20:41.640 20:37:59 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:41.640 20:37:59 -- target/connect_stress.sh@28 -- # cat 00:20:41.640 20:37:59 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:41.640 20:37:59 -- target/connect_stress.sh@28 -- # cat 00:20:41.640 20:37:59 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:41.640 20:37:59 -- target/connect_stress.sh@28 -- # cat 00:20:41.640 20:37:59 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:41.640 20:37:59 -- target/connect_stress.sh@28 -- # cat 00:20:41.900 20:37:59 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:41.900 20:37:59 -- target/connect_stress.sh@28 -- # cat 00:20:41.900 20:37:59 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:41.900 20:37:59 -- target/connect_stress.sh@28 -- # cat 00:20:41.900 20:37:59 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:41.900 20:37:59 -- target/connect_stress.sh@28 -- # cat 00:20:41.900 20:37:59 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:41.900 20:37:59 -- target/connect_stress.sh@28 -- # cat 00:20:41.900 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.900 20:37:59 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:41.900 20:37:59 -- target/connect_stress.sh@28 -- # cat 00:20:41.900 20:38:00 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:41.900 20:38:00 -- target/connect_stress.sh@28 -- # cat 00:20:41.900 20:38:00 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:41.900 20:38:00 -- target/connect_stress.sh@28 -- # cat 00:20:41.900 20:38:00 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:41.900 20:38:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:41.900 20:38:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:41.900 20:38:00 -- common/autotest_common.sh@10 -- # set +x 00:20:42.159 20:38:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:42.159 20:38:00 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:42.159 20:38:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:42.159 20:38:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:42.159 20:38:00 -- common/autotest_common.sh@10 -- # set +x 00:20:42.418 20:38:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:42.418 20:38:00 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:42.418 20:38:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:42.418 20:38:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:42.418 20:38:00 -- common/autotest_common.sh@10 -- # set +x 00:20:42.677 20:38:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:42.677 20:38:00 -- target/connect_stress.sh@34 -- # kill -0 3538781 
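
The seq 1 20 / cat pairs above come from connect_stress.sh building a batch of RPC invocations in rpc.txt (exactly what each cat appends is not visible in the trace), and the alternating kill -0 3538781 / rpc_cmd entries around them are its watchdog cycle: 3538781 is PERF_PID, the connect_stress client launched above with -t 10, and the script keeps replaying the RPC batch for as long as that client stays alive. The shape of that loop, reconstructed from the trace rather than quoted from the script:

    connect_stress -c 0x1 -r "$TRADDR" -t 10 &   # $TRADDR = the 'trtype:tcp ... cnode1' string logged above
    PERF_PID=$!
    while kill -0 "$PERF_PID" 2> /dev/null; do   # signal 0 only probes existence, it sends nothing
        rpc_cmd < "$rpcs"                        # replay the batched RPCs against the target meanwhile
    done
    wait "$PERF_PID"

The "kill: (3538781) - No such process" line further down is this loop's natural exit once the client's ten-second run ends.
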
00:20:42.677 20:38:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:42.677 20:38:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:42.677 20:38:00 -- common/autotest_common.sh@10 -- # set +x 00:20:43.247 20:38:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:43.247 20:38:01 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:43.247 20:38:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:43.247 20:38:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:43.247 20:38:01 -- common/autotest_common.sh@10 -- # set +x 00:20:43.508 20:38:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:43.508 20:38:01 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:43.508 20:38:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:43.508 20:38:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:43.508 20:38:01 -- common/autotest_common.sh@10 -- # set +x 00:20:43.766 20:38:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:43.766 20:38:01 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:43.766 20:38:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:43.766 20:38:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:43.766 20:38:01 -- common/autotest_common.sh@10 -- # set +x 00:20:44.025 20:38:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:44.025 20:38:02 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:44.025 20:38:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:44.025 20:38:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:44.025 20:38:02 -- common/autotest_common.sh@10 -- # set +x 00:20:44.283 20:38:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:44.283 20:38:02 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:44.283 20:38:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:44.283 20:38:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:44.283 20:38:02 -- common/autotest_common.sh@10 -- # set +x 00:20:44.854 20:38:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:44.854 20:38:02 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:44.854 20:38:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:44.854 20:38:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:44.854 20:38:02 -- common/autotest_common.sh@10 -- # set +x 00:20:45.115 20:38:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:45.115 20:38:03 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:45.115 20:38:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:45.115 20:38:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:45.115 20:38:03 -- common/autotest_common.sh@10 -- # set +x 00:20:45.376 20:38:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:45.376 20:38:03 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:45.376 20:38:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:45.376 20:38:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:45.376 20:38:03 -- common/autotest_common.sh@10 -- # set +x 00:20:45.635 20:38:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:45.635 20:38:03 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:45.635 20:38:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:45.635 20:38:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:45.635 20:38:03 -- common/autotest_common.sh@10 -- # set +x 00:20:45.896 20:38:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:45.896 20:38:04 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:45.896 
20:38:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:45.896 20:38:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:45.896 20:38:04 -- common/autotest_common.sh@10 -- # set +x 00:20:46.464 20:38:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:46.464 20:38:04 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:46.464 20:38:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:46.464 20:38:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:46.464 20:38:04 -- common/autotest_common.sh@10 -- # set +x 00:20:46.724 20:38:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:46.724 20:38:04 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:46.724 20:38:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:46.724 20:38:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:46.724 20:38:04 -- common/autotest_common.sh@10 -- # set +x 00:20:46.984 20:38:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:46.984 20:38:05 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:46.984 20:38:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:46.984 20:38:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:46.984 20:38:05 -- common/autotest_common.sh@10 -- # set +x 00:20:47.244 20:38:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:47.244 20:38:05 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:47.244 20:38:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:47.244 20:38:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:47.244 20:38:05 -- common/autotest_common.sh@10 -- # set +x 00:20:47.502 20:38:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:47.502 20:38:05 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:47.502 20:38:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:47.502 20:38:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:47.502 20:38:05 -- common/autotest_common.sh@10 -- # set +x 00:20:48.070 20:38:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:48.070 20:38:06 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:48.070 20:38:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:48.070 20:38:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:48.070 20:38:06 -- common/autotest_common.sh@10 -- # set +x 00:20:48.330 20:38:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:48.330 20:38:06 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:48.330 20:38:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:48.330 20:38:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:48.330 20:38:06 -- common/autotest_common.sh@10 -- # set +x 00:20:48.592 20:38:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:48.592 20:38:06 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:48.592 20:38:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:48.592 20:38:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:48.592 20:38:06 -- common/autotest_common.sh@10 -- # set +x 00:20:48.851 20:38:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:48.851 20:38:07 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:48.851 20:38:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:48.851 20:38:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:48.851 20:38:07 -- common/autotest_common.sh@10 -- # set +x 00:20:49.110 20:38:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:49.110 20:38:07 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:49.110 20:38:07 -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:20:49.110 20:38:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.110 20:38:07 -- common/autotest_common.sh@10 -- # set +x 00:20:49.675 20:38:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:49.675 20:38:07 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:49.675 20:38:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:49.675 20:38:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.675 20:38:07 -- common/autotest_common.sh@10 -- # set +x 00:20:49.934 20:38:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:49.934 20:38:08 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:49.934 20:38:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:49.934 20:38:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.934 20:38:08 -- common/autotest_common.sh@10 -- # set +x 00:20:50.194 20:38:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.194 20:38:08 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:50.194 20:38:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:50.194 20:38:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.194 20:38:08 -- common/autotest_common.sh@10 -- # set +x 00:20:50.454 20:38:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.454 20:38:08 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:50.454 20:38:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:50.454 20:38:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.454 20:38:08 -- common/autotest_common.sh@10 -- # set +x 00:20:50.714 20:38:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.714 20:38:08 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:50.714 20:38:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:50.714 20:38:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.714 20:38:08 -- common/autotest_common.sh@10 -- # set +x 00:20:51.281 20:38:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:51.281 20:38:09 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:51.281 20:38:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:51.281 20:38:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:51.281 20:38:09 -- common/autotest_common.sh@10 -- # set +x 00:20:51.540 20:38:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:51.540 20:38:09 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:51.540 20:38:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:51.540 20:38:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:51.540 20:38:09 -- common/autotest_common.sh@10 -- # set +x 00:20:51.800 20:38:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:51.800 20:38:09 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:51.800 20:38:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:51.800 20:38:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:51.800 20:38:09 -- common/autotest_common.sh@10 -- # set +x 00:20:51.800 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:52.058 20:38:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:52.058 20:38:10 -- target/connect_stress.sh@34 -- # kill -0 3538781 00:20:52.059 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3538781) - No such process 00:20:52.059 20:38:10 -- target/connect_stress.sh@38 -- # wait 3538781 00:20:52.059 20:38:10 -- target/connect_stress.sh@39 -- # rm -f 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:20:52.059 20:38:10 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:20:52.059 20:38:10 -- target/connect_stress.sh@43 -- # nvmftestfini
00:20:52.059 20:38:10 -- nvmf/common.sh@476 -- # nvmfcleanup
00:20:52.059 20:38:10 -- nvmf/common.sh@116 -- # sync
00:20:52.059 20:38:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:20:52.059 20:38:10 -- nvmf/common.sh@119 -- # set +e
00:20:52.059 20:38:10 -- nvmf/common.sh@120 -- # for i in {1..20}
00:20:52.059 20:38:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:20:52.059 rmmod nvme_tcp
00:20:52.059 rmmod nvme_fabrics
00:20:52.059 rmmod nvme_keyring
00:20:52.059 20:38:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:20:52.059 20:38:10 -- nvmf/common.sh@123 -- # set -e
00:20:52.059 20:38:10 -- nvmf/common.sh@124 -- # return 0
00:20:52.059 20:38:10 -- nvmf/common.sh@477 -- # '[' -n 3538466 ']'
00:20:52.059 20:38:10 -- nvmf/common.sh@478 -- # killprocess 3538466
00:20:52.059 20:38:10 -- common/autotest_common.sh@926 -- # '[' -z 3538466 ']'
00:20:52.059 20:38:10 -- common/autotest_common.sh@930 -- # kill -0 3538466
00:20:52.059 20:38:10 -- common/autotest_common.sh@931 -- # uname
00:20:52.059 20:38:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:20:52.059 20:38:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3538466
00:20:52.319 20:38:10 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:20:52.319 20:38:10 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:20:52.319 20:38:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3538466'
00:20:52.319 killing process with pid 3538466
00:20:52.319 20:38:10 -- common/autotest_common.sh@945 -- # kill 3538466
00:20:52.319 20:38:10 -- common/autotest_common.sh@950 -- # wait 3538466
00:20:52.580 20:38:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:20:52.580 20:38:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:20:52.580 20:38:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:20:52.580 20:38:10 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:52.580 20:38:10 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:20:52.580 20:38:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:52.580 20:38:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:52.580 20:38:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:55.125 20:38:12 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:20:55.125
00:20:55.125 real 0m19.459s
00:20:55.125 user 0m43.885s
00:20:55.125 sys 0m5.835s
00:20:55.125 20:38:12 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:20:55.125 20:38:12 -- common/autotest_common.sh@10 -- # set +x
00:20:55.125 ************************************
00:20:55.125 END TEST nvmf_connect_stress ************************************
00:20:55.125 20:38:12 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:20:55.125 20:38:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:20:55.125 20:38:12 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:20:55.125 20:38:12 -- common/autotest_common.sh@10 -- # set +x
00:20:55.125 ************************************
00:20:55.125 START TEST nvmf_fused_ordering
00:20:55.125 ************************************
00:20:55.125 20:38:12 -- common/autotest_common.sh@1104 -- #
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:20:55.125 * Looking for test storage... 00:20:55.125 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:20:55.125 20:38:13 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:20:55.125 20:38:13 -- nvmf/common.sh@7 -- # uname -s 00:20:55.125 20:38:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:55.125 20:38:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:55.125 20:38:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:55.125 20:38:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:55.125 20:38:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:55.125 20:38:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:55.125 20:38:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:55.125 20:38:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:55.125 20:38:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:55.125 20:38:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:55.125 20:38:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:55.125 20:38:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:20:55.125 20:38:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:55.125 20:38:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:55.125 20:38:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:55.125 20:38:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:20:55.125 20:38:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:55.125 20:38:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:55.125 20:38:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:55.125 20:38:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.125 20:38:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.125 20:38:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.125 20:38:13 -- paths/export.sh@5 -- # export PATH 00:20:55.125 20:38:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.125 20:38:13 -- nvmf/common.sh@46 -- # : 0 00:20:55.125 20:38:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:55.125 20:38:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:55.125 20:38:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:55.125 20:38:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:55.125 20:38:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:55.125 20:38:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:55.125 20:38:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:55.125 20:38:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:55.125 20:38:13 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:20:55.125 20:38:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:55.125 20:38:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:55.125 20:38:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:55.125 20:38:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:55.125 20:38:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:55.125 20:38:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.125 20:38:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:55.125 20:38:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.125 20:38:13 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:20:55.125 20:38:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:55.125 20:38:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:55.125 20:38:13 -- common/autotest_common.sh@10 -- # set +x 00:21:00.411 20:38:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:00.411 20:38:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:00.411 20:38:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:00.411 20:38:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:00.411 20:38:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:00.411 20:38:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:00.411 20:38:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:00.411 20:38:18 -- nvmf/common.sh@294 -- # net_devs=() 00:21:00.411 20:38:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:00.411 20:38:18 -- nvmf/common.sh@295 -- # e810=() 00:21:00.411 20:38:18 -- nvmf/common.sh@295 -- # local -ga e810 00:21:00.411 20:38:18 -- nvmf/common.sh@296 -- # 
x722=() 00:21:00.411 20:38:18 -- nvmf/common.sh@296 -- # local -ga x722 00:21:00.411 20:38:18 -- nvmf/common.sh@297 -- # mlx=() 00:21:00.411 20:38:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:00.411 20:38:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:00.411 20:38:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:00.411 20:38:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:00.411 20:38:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:00.411 20:38:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:00.411 20:38:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:00.411 20:38:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:00.411 20:38:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:00.411 20:38:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:00.411 20:38:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:00.411 20:38:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:00.411 20:38:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:00.411 20:38:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:00.411 20:38:18 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:21:00.411 20:38:18 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:21:00.411 20:38:18 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:21:00.411 20:38:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:00.411 20:38:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:00.411 20:38:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:21:00.411 Found 0000:27:00.0 (0x8086 - 0x159b) 00:21:00.411 20:38:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:00.411 20:38:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:00.411 20:38:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.411 20:38:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.412 20:38:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:00.412 20:38:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:00.412 20:38:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:21:00.412 Found 0000:27:00.1 (0x8086 - 0x159b) 00:21:00.412 20:38:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:00.412 20:38:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:00.412 20:38:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.412 20:38:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.412 20:38:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:00.412 20:38:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:00.412 20:38:18 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:21:00.412 20:38:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:00.412 20:38:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.412 20:38:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:00.412 20:38:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.412 20:38:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:21:00.412 Found net devices under 0000:27:00.0: cvl_0_0 00:21:00.412 20:38:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.412 20:38:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:21:00.412 20:38:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.412 20:38:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:00.412 20:38:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.412 20:38:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:21:00.412 Found net devices under 0000:27:00.1: cvl_0_1 00:21:00.412 20:38:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.412 20:38:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:00.412 20:38:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:00.412 20:38:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:00.412 20:38:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:00.412 20:38:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:00.412 20:38:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:00.412 20:38:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:00.412 20:38:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:00.412 20:38:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:00.412 20:38:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:00.412 20:38:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:00.412 20:38:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:00.412 20:38:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:00.412 20:38:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:00.412 20:38:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:00.412 20:38:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:00.412 20:38:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:00.412 20:38:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:00.412 20:38:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:00.412 20:38:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:00.412 20:38:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:00.412 20:38:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:00.412 20:38:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:00.412 20:38:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:00.412 20:38:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:00.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:00.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:21:00.412 00:21:00.412 --- 10.0.0.2 ping statistics --- 00:21:00.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.412 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:21:00.412 20:38:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:00.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:00.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.434 ms 00:21:00.412 00:21:00.412 --- 10.0.0.1 ping statistics --- 00:21:00.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.412 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:21:00.412 20:38:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:00.412 20:38:18 -- nvmf/common.sh@410 -- # return 0 00:21:00.412 20:38:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:00.412 20:38:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:00.412 20:38:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:00.412 20:38:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:00.412 20:38:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:00.412 20:38:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:00.412 20:38:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:00.412 20:38:18 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:21:00.412 20:38:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:00.412 20:38:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:00.412 20:38:18 -- common/autotest_common.sh@10 -- # set +x 00:21:00.412 20:38:18 -- nvmf/common.sh@469 -- # nvmfpid=3544632 00:21:00.412 20:38:18 -- nvmf/common.sh@470 -- # waitforlisten 3544632 00:21:00.412 20:38:18 -- common/autotest_common.sh@819 -- # '[' -z 3544632 ']' 00:21:00.412 20:38:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.412 20:38:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:00.412 20:38:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.412 20:38:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:00.412 20:38:18 -- common/autotest_common.sh@10 -- # set +x 00:21:00.412 20:38:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:00.412 [2024-04-26 20:38:18.477991] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:00.412 [2024-04-26 20:38:18.478102] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.412 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.412 [2024-04-26 20:38:18.603149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.412 [2024-04-26 20:38:18.706374] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:00.412 [2024-04-26 20:38:18.706600] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.412 [2024-04-26 20:38:18.706616] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:00.412 [2024-04-26 20:38:18.706627] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
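
nvmfappstart here brings up a fresh single-core target for the fused-ordering test (pid 3544632, core mask 0x2) inside the same namespace, and waitforlisten is what prints the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line: it polls the target's RPC socket until it answers, bailing out if the process dies first. A minimal equivalent of that wait, assuming an rpc.py-style probe (the real helper lives in autotest_common.sh and is more involved; $rootdir stands for the spdk checkout):

    ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    max_retries=100                              # mirrors 'local max_retries=100' in the trace above
    until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid"                       # abort if the target died instead of listening
        (( --max_retries > 0 )) || exit 1
        sleep 0.1
    done
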
00:21:00.412 [2024-04-26 20:38:18.706667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:00.983 20:38:19 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:21:00.983 20:38:19 -- common/autotest_common.sh@852 -- # return 0
00:21:00.983 20:38:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:21:00.983 20:38:19 -- common/autotest_common.sh@718 -- # xtrace_disable
00:21:00.983 20:38:19 -- common/autotest_common.sh@10 -- # set +x
00:21:00.983 20:38:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:00.983 20:38:19 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:21:00.983 20:38:19 -- common/autotest_common.sh@551 -- # xtrace_disable
00:21:00.983 20:38:19 -- common/autotest_common.sh@10 -- # set +x
00:21:00.983 [2024-04-26 20:38:19.206955] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:00.983 20:38:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:21:00.983 20:38:19 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:21:00.983 20:38:19 -- common/autotest_common.sh@551 -- # xtrace_disable
00:21:00.983 20:38:19 -- common/autotest_common.sh@10 -- # set +x
00:21:00.983 20:38:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:21:00.983 20:38:19 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:00.983 20:38:19 -- common/autotest_common.sh@551 -- # xtrace_disable
00:21:00.983 20:38:19 -- common/autotest_common.sh@10 -- # set +x
00:21:00.983 [2024-04-26 20:38:19.223165] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:00.983 20:38:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:21:00.983 20:38:19 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:21:00.983 20:38:19 -- common/autotest_common.sh@551 -- # xtrace_disable
00:21:00.983 20:38:19 -- common/autotest_common.sh@10 -- # set +x
00:21:00.983 NULL1
00:21:00.983 20:38:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:21:00.983 20:38:19 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:21:00.983 20:38:19 -- common/autotest_common.sh@551 -- # xtrace_disable
00:21:00.983 20:38:19 -- common/autotest_common.sh@10 -- # set +x
00:21:00.983 20:38:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:21:00.983 20:38:19 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:21:00.983 20:38:19 -- common/autotest_common.sh@551 -- # xtrace_disable
00:21:00.983 20:38:19 -- common/autotest_common.sh@10 -- # set +x
00:21:00.983 20:38:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:21:00.983 20:38:19 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:21:01.242 [2024-04-26 20:38:19.289568] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
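
Before the client's EAL parameter dump below, the rpc_cmd sequence above is worth reading linearly, because it is the entire provisioning path for this test: create the TCP transport, create subsystem cnode1, attach a TCP listener on 10.0.0.2:4420, back the subsystem with a null bdev (1000 MB in 512-byte blocks, which is why the client later reports "Namespace ID: 1 size: 1GB"), and expose that bdev as namespace 1. Stripped of the xtrace noise (binary path shortened):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512
    rpc_cmd bdev_wait_for_examine
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The fused_ordering(N) counters that fill the rest of this section are that client's per-iteration output; the name refers to NVMe fused operations (adjacent command pairs, such as Compare and Write, that the controller must execute back-to-back as one atomic unit), which is the ordering property the client exercises over the TCP transport.
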
00:21:00.983 [2024-04-26 20:38:19.289645] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3544838 ] 00:21:01.242 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.500 Attached to nqn.2016-06.io.spdk:cnode1 00:21:01.500 Namespace ID: 1 size: 1GB 00:21:01.500 fused_ordering(0) 00:21:01.500 fused_ordering(1) 00:21:01.500 fused_ordering(2) 00:21:01.500 fused_ordering(3) 00:21:01.500 fused_ordering(4) 00:21:01.500 fused_ordering(5) 00:21:01.500 fused_ordering(6) 00:21:01.500 fused_ordering(7) 00:21:01.500 fused_ordering(8) 00:21:01.500 fused_ordering(9) 00:21:01.500 fused_ordering(10) 00:21:01.500 fused_ordering(11) 00:21:01.500 fused_ordering(12) 00:21:01.500 fused_ordering(13) 00:21:01.500 fused_ordering(14) 00:21:01.500 fused_ordering(15) 00:21:01.500 fused_ordering(16) 00:21:01.500 fused_ordering(17) 00:21:01.500 fused_ordering(18) 00:21:01.500 fused_ordering(19) 00:21:01.500 fused_ordering(20) 00:21:01.500 fused_ordering(21) 00:21:01.500 fused_ordering(22) 00:21:01.500 fused_ordering(23) 00:21:01.500 fused_ordering(24) 00:21:01.500 fused_ordering(25) 00:21:01.500 fused_ordering(26) 00:21:01.500 fused_ordering(27) 00:21:01.500 fused_ordering(28) 00:21:01.500 fused_ordering(29) 00:21:01.500 fused_ordering(30) 00:21:01.500 fused_ordering(31) 00:21:01.500 fused_ordering(32) 00:21:01.500 fused_ordering(33) 00:21:01.500 fused_ordering(34) 00:21:01.500 fused_ordering(35) 00:21:01.500 fused_ordering(36) 00:21:01.500 fused_ordering(37) 00:21:01.500 fused_ordering(38) 00:21:01.500 fused_ordering(39) 00:21:01.500 fused_ordering(40) 00:21:01.500 fused_ordering(41) 00:21:01.500 fused_ordering(42) 00:21:01.500 fused_ordering(43) 00:21:01.500 fused_ordering(44) 00:21:01.500 fused_ordering(45) 00:21:01.500 fused_ordering(46) 00:21:01.500 fused_ordering(47) 00:21:01.500 fused_ordering(48) 00:21:01.500 fused_ordering(49) 00:21:01.500 fused_ordering(50) 00:21:01.500 fused_ordering(51) 00:21:01.500 fused_ordering(52) 00:21:01.500 fused_ordering(53) 00:21:01.500 fused_ordering(54) 00:21:01.500 fused_ordering(55) 00:21:01.500 fused_ordering(56) 00:21:01.500 fused_ordering(57) 00:21:01.500 fused_ordering(58) 00:21:01.500 fused_ordering(59) 00:21:01.500 fused_ordering(60) 00:21:01.500 fused_ordering(61) 00:21:01.500 fused_ordering(62) 00:21:01.500 fused_ordering(63) 00:21:01.500 fused_ordering(64) 00:21:01.500 fused_ordering(65) 00:21:01.500 fused_ordering(66) 00:21:01.500 fused_ordering(67) 00:21:01.500 fused_ordering(68) 00:21:01.500 fused_ordering(69) 00:21:01.500 fused_ordering(70) 00:21:01.500 fused_ordering(71) 00:21:01.500 fused_ordering(72) 00:21:01.500 fused_ordering(73) 00:21:01.500 fused_ordering(74) 00:21:01.500 fused_ordering(75) 00:21:01.500 fused_ordering(76) 00:21:01.500 fused_ordering(77) 00:21:01.500 fused_ordering(78) 00:21:01.500 fused_ordering(79) 00:21:01.500 fused_ordering(80) 00:21:01.500 fused_ordering(81) 00:21:01.500 fused_ordering(82) 00:21:01.500 fused_ordering(83) 00:21:01.500 fused_ordering(84) 00:21:01.500 fused_ordering(85) 00:21:01.500 fused_ordering(86) 00:21:01.500 fused_ordering(87) 00:21:01.500 fused_ordering(88) 00:21:01.500 fused_ordering(89) 00:21:01.500 fused_ordering(90) 00:21:01.500 fused_ordering(91) 00:21:01.500 fused_ordering(92) 00:21:01.500 fused_ordering(93) 00:21:01.500 fused_ordering(94) 00:21:01.500 fused_ordering(95) 00:21:01.500 fused_ordering(96) 00:21:01.500 
fused_ordering(97) 00:21:01.500 fused_ordering(98) 00:21:01.500 fused_ordering(99) 00:21:01.500 fused_ordering(100) 00:21:01.500 fused_ordering(101) 00:21:01.500 fused_ordering(102) 00:21:01.500 fused_ordering(103) 00:21:01.500 fused_ordering(104) 00:21:01.500 fused_ordering(105) 00:21:01.500 fused_ordering(106) 00:21:01.500 fused_ordering(107) 00:21:01.500 fused_ordering(108) 00:21:01.500 fused_ordering(109) 00:21:01.500 fused_ordering(110) 00:21:01.500 fused_ordering(111) 00:21:01.500 fused_ordering(112) 00:21:01.500 fused_ordering(113) 00:21:01.500 fused_ordering(114) 00:21:01.500 fused_ordering(115) 00:21:01.500 fused_ordering(116) 00:21:01.500 fused_ordering(117) 00:21:01.500 fused_ordering(118) 00:21:01.500 fused_ordering(119) 00:21:01.500 fused_ordering(120) 00:21:01.500 fused_ordering(121) 00:21:01.500 fused_ordering(122) 00:21:01.500 fused_ordering(123) 00:21:01.500 fused_ordering(124) 00:21:01.500 fused_ordering(125) 00:21:01.500 fused_ordering(126) 00:21:01.500 fused_ordering(127) 00:21:01.500 fused_ordering(128) 00:21:01.500 fused_ordering(129) 00:21:01.500 fused_ordering(130) 00:21:01.500 fused_ordering(131) 00:21:01.500 fused_ordering(132) 00:21:01.500 fused_ordering(133) 00:21:01.500 fused_ordering(134) 00:21:01.500 fused_ordering(135) 00:21:01.500 fused_ordering(136) 00:21:01.500 fused_ordering(137) 00:21:01.500 fused_ordering(138) 00:21:01.500 fused_ordering(139) 00:21:01.500 fused_ordering(140) 00:21:01.500 fused_ordering(141) 00:21:01.500 fused_ordering(142) 00:21:01.500 fused_ordering(143) 00:21:01.500 fused_ordering(144) 00:21:01.500 fused_ordering(145) 00:21:01.500 fused_ordering(146) 00:21:01.500 fused_ordering(147) 00:21:01.500 fused_ordering(148) 00:21:01.500 fused_ordering(149) 00:21:01.500 fused_ordering(150) 00:21:01.500 fused_ordering(151) 00:21:01.500 fused_ordering(152) 00:21:01.500 fused_ordering(153) 00:21:01.500 fused_ordering(154) 00:21:01.500 fused_ordering(155) 00:21:01.500 fused_ordering(156) 00:21:01.500 fused_ordering(157) 00:21:01.500 fused_ordering(158) 00:21:01.500 fused_ordering(159) 00:21:01.500 fused_ordering(160) 00:21:01.500 fused_ordering(161) 00:21:01.500 fused_ordering(162) 00:21:01.500 fused_ordering(163) 00:21:01.500 fused_ordering(164) 00:21:01.500 fused_ordering(165) 00:21:01.500 fused_ordering(166) 00:21:01.500 fused_ordering(167) 00:21:01.500 fused_ordering(168) 00:21:01.500 fused_ordering(169) 00:21:01.500 fused_ordering(170) 00:21:01.500 fused_ordering(171) 00:21:01.500 fused_ordering(172) 00:21:01.500 fused_ordering(173) 00:21:01.500 fused_ordering(174) 00:21:01.500 fused_ordering(175) 00:21:01.500 fused_ordering(176) 00:21:01.500 fused_ordering(177) 00:21:01.500 fused_ordering(178) 00:21:01.500 fused_ordering(179) 00:21:01.500 fused_ordering(180) 00:21:01.500 fused_ordering(181) 00:21:01.500 fused_ordering(182) 00:21:01.500 fused_ordering(183) 00:21:01.500 fused_ordering(184) 00:21:01.500 fused_ordering(185) 00:21:01.500 fused_ordering(186) 00:21:01.500 fused_ordering(187) 00:21:01.500 fused_ordering(188) 00:21:01.500 fused_ordering(189) 00:21:01.500 fused_ordering(190) 00:21:01.500 fused_ordering(191) 00:21:01.500 fused_ordering(192) 00:21:01.500 fused_ordering(193) 00:21:01.500 fused_ordering(194) 00:21:01.500 fused_ordering(195) 00:21:01.500 fused_ordering(196) 00:21:01.500 fused_ordering(197) 00:21:01.500 fused_ordering(198) 00:21:01.500 fused_ordering(199) 00:21:01.500 fused_ordering(200) 00:21:01.500 fused_ordering(201) 00:21:01.500 fused_ordering(202) 00:21:01.500 fused_ordering(203) 00:21:01.500 fused_ordering(204) 
00:21:01.500 fused_ordering(205) through fused_ordering(956) 00:21:02.851 [752 per-iteration counter entries condensed: each iteration logged only "fused_ordering(N)", with the timestamp stepping through 00:21:01.500, 00:21:02.068, 00:21:02.329, 00:21:02.591 and 00:21:02.851 as the loop ran]
fused_ordering(957) through fused_ordering(1023) 00:21:02.851 [final 67 iteration counters condensed] 00:21:02.851 20:38:21 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:21:02.851 20:38:21 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:21:02.851 20:38:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:02.851 20:38:21 -- nvmf/common.sh@116 -- # sync 00:21:02.851 20:38:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:02.851 20:38:21 -- nvmf/common.sh@119 -- # set +e 00:21:02.851 20:38:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:02.851 20:38:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:02.851 rmmod nvme_tcp 00:21:02.851 rmmod nvme_fabrics 00:21:03.109 rmmod nvme_keyring 00:21:03.109 20:38:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:03.109 20:38:21 -- nvmf/common.sh@123 -- # set -e 00:21:03.109 20:38:21 -- nvmf/common.sh@124 -- # return 0 00:21:03.109 20:38:21 -- nvmf/common.sh@477 -- # '[' -n 3544632 ']' 00:21:03.109 20:38:21 -- nvmf/common.sh@478 -- # killprocess 3544632 00:21:03.109 20:38:21 -- common/autotest_common.sh@926 -- # '[' -z 3544632 ']' 00:21:03.109 20:38:21 -- common/autotest_common.sh@930 -- # kill -0 3544632 00:21:03.109 20:38:21 -- common/autotest_common.sh@931 -- # uname 00:21:03.109 20:38:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:03.109 20:38:21 -- common/autotest_common.sh@932 -- # ps --no-headers
-o comm= 3544632 00:21:03.109 20:38:21 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:03.109 20:38:21 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:03.109 20:38:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3544632' 00:21:03.109 killing process with pid 3544632 00:21:03.109 20:38:21 -- common/autotest_common.sh@945 -- # kill 3544632 00:21:03.109 20:38:21 -- common/autotest_common.sh@950 -- # wait 3544632 00:21:03.676 20:38:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:03.676 20:38:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:03.676 20:38:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:03.676 20:38:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:03.676 20:38:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:03.676 20:38:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.676 20:38:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:03.676 20:38:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.585 20:38:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:05.585 00:21:05.585 real 0m10.822s 00:21:05.585 user 0m6.187s 00:21:05.585 sys 0m4.816s 00:21:05.585 20:38:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:05.585 20:38:23 -- common/autotest_common.sh@10 -- # set +x 00:21:05.585 ************************************ 00:21:05.585 END TEST nvmf_fused_ordering 00:21:05.585 ************************************ 00:21:05.585 20:38:23 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:21:05.585 20:38:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:05.585 20:38:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:05.585 20:38:23 -- common/autotest_common.sh@10 -- # set +x 00:21:05.585 ************************************ 00:21:05.585 START TEST nvmf_delete_subsystem 00:21:05.585 ************************************ 00:21:05.585 20:38:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:21:05.585 * Looking for test storage... 
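The START TEST banner above comes from the run_test helper in autotest_common.sh, which times each test case and later prints the matching END TEST banner together with the real/user/sys summary seen at the close of nvmf_fused_ordering. As a rough, hedged sketch of what such a wrapper does (this is not the actual SPDK helper, which also validates its arguments, as the '[' 3 -le 1 ']' check above suggests):

    # Hedged sketch of a run_test-style wrapper; banners copied from the log.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                  # run the test script, report real/user/sys
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    run_test nvmf_delete_subsystem \
        /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp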
00:21:05.585 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:21:05.585 20:38:23 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:21:05.585 20:38:23 -- nvmf/common.sh@7 -- # uname -s 00:21:05.585 20:38:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:05.585 20:38:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:05.585 20:38:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:05.585 20:38:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:05.585 20:38:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:05.585 20:38:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:05.585 20:38:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:05.585 20:38:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:05.585 20:38:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:05.585 20:38:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:05.585 20:38:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:05.585 20:38:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:21:05.585 20:38:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:05.585 20:38:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:05.585 20:38:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:05.585 20:38:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:21:05.585 20:38:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.585 20:38:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.585 20:38:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.585 20:38:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.585 20:38:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.585 20:38:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.585 20:38:23 -- paths/export.sh@5 -- # export PATH 00:21:05.585 20:38:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.585 20:38:23 -- nvmf/common.sh@46 -- # : 0 00:21:05.585 20:38:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:05.585 20:38:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:05.585 20:38:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:05.585 20:38:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:05.585 20:38:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:05.585 20:38:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:05.585 20:38:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:05.585 20:38:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:05.585 20:38:23 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:21:05.585 20:38:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:05.846 20:38:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.846 20:38:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:05.846 20:38:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:05.846 20:38:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:05.846 20:38:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.846 20:38:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:05.846 20:38:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.846 20:38:23 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:21:05.846 20:38:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:05.846 20:38:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:05.846 20:38:23 -- common/autotest_common.sh@10 -- # set +x 00:21:11.123 20:38:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:11.123 20:38:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:11.123 20:38:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:11.123 20:38:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:11.123 20:38:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:11.123 20:38:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:11.123 20:38:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:11.123 20:38:28 -- nvmf/common.sh@294 -- # net_devs=() 00:21:11.123 20:38:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:11.123 20:38:28 -- nvmf/common.sh@295 -- # e810=() 00:21:11.123 20:38:28 -- nvmf/common.sh@295 -- # local -ga e810 00:21:11.123 20:38:28 -- nvmf/common.sh@296 -- 
# x722=() 00:21:11.123 20:38:28 -- nvmf/common.sh@296 -- # local -ga x722 00:21:11.123 20:38:28 -- nvmf/common.sh@297 -- # mlx=() 00:21:11.123 20:38:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:11.123 20:38:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.123 20:38:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.123 20:38:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.123 20:38:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.123 20:38:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.123 20:38:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.123 20:38:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.123 20:38:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.123 20:38:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.123 20:38:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.123 20:38:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.123 20:38:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:11.123 20:38:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:11.123 20:38:28 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:21:11.123 20:38:28 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:21:11.123 20:38:28 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:21:11.123 20:38:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:11.123 20:38:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:11.123 20:38:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:21:11.123 Found 0000:27:00.0 (0x8086 - 0x159b) 00:21:11.123 20:38:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:11.123 20:38:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:11.123 20:38:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.123 20:38:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.123 20:38:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:11.123 20:38:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:11.123 20:38:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:21:11.123 Found 0000:27:00.1 (0x8086 - 0x159b) 00:21:11.123 20:38:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:11.123 20:38:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:11.123 20:38:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.123 20:38:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.123 20:38:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:11.123 20:38:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:11.123 20:38:28 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:21:11.123 20:38:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:11.123 20:38:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.123 20:38:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:11.123 20:38:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.123 20:38:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:21:11.123 Found net devices under 0000:27:00.0: cvl_0_0 00:21:11.123 20:38:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.123 20:38:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
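The gather_supported_nvmf_pci_devs loop running here resolves each recognized PCI function to its kernel network interface through sysfs, which is where the "Found net devices under ..." lines come from. A minimal sketch of that lookup for the first port reported above (the address is taken from this log; variable names mirror the trace, not necessarily the real script):

    pci=0000:27:00.0                                   # first ice (0x159b) port found above
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs lists the netdevs owned by this function
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the directory part, keep interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"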
00:21:11.123 20:38:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.123 20:38:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:11.123 20:38:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.123 20:38:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:21:11.123 Found net devices under 0000:27:00.1: cvl_0_1 00:21:11.123 20:38:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.123 20:38:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:11.123 20:38:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:11.123 20:38:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:11.123 20:38:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:11.123 20:38:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:11.123 20:38:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:11.123 20:38:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:11.123 20:38:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:11.123 20:38:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:11.123 20:38:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:11.123 20:38:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:11.123 20:38:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:11.123 20:38:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:11.123 20:38:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:11.123 20:38:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:11.123 20:38:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:11.123 20:38:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:11.123 20:38:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:11.123 20:38:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:11.123 20:38:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:11.123 20:38:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:11.123 20:38:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:11.123 20:38:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:11.123 20:38:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:11.123 20:38:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:11.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:11.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:21:11.123 00:21:11.123 --- 10.0.0.2 ping statistics --- 00:21:11.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.123 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:21:11.123 20:38:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:11.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:11.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:21:11.123 00:21:11.123 --- 10.0.0.1 ping statistics --- 00:21:11.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.123 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:21:11.123 20:38:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:11.123 20:38:28 -- nvmf/common.sh@410 -- # return 0 00:21:11.123 20:38:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:11.123 20:38:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:11.123 20:38:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:11.123 20:38:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:11.123 20:38:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:11.123 20:38:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:11.123 20:38:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:11.123 20:38:29 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:21:11.123 20:38:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:11.123 20:38:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:11.123 20:38:29 -- common/autotest_common.sh@10 -- # set +x 00:21:11.123 20:38:29 -- nvmf/common.sh@469 -- # nvmfpid=3549047 00:21:11.123 20:38:29 -- nvmf/common.sh@470 -- # waitforlisten 3549047 00:21:11.123 20:38:29 -- common/autotest_common.sh@819 -- # '[' -z 3549047 ']' 00:21:11.123 20:38:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.123 20:38:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:11.123 20:38:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.123 20:38:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:11.123 20:38:29 -- common/autotest_common.sh@10 -- # set +x 00:21:11.123 20:38:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:11.123 [2024-04-26 20:38:29.063436] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:11.123 [2024-04-26 20:38:29.063504] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.123 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.123 [2024-04-26 20:38:29.152254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:11.123 [2024-04-26 20:38:29.246218] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:11.123 [2024-04-26 20:38:29.246390] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.123 [2024-04-26 20:38:29.246403] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:11.123 [2024-04-26 20:38:29.246412] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
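At this point nvmf_tcp_init has finished assembling the test topology: the target port cvl_0_0 sits inside the cvl_0_0_ns_spdk namespace as 10.0.0.2, the initiator keeps cvl_0_1 as 10.0.0.1 in the root namespace, and both directions ping. A condensed sketch of that setup, using only commands that appear in the trace above (error handling and the initial address flushes omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                             # initiator -> target check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator check
    modprobe nvme-tcp                                              # host-side transport module
    # nvmfappstart then launches the target inside the namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &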
00:21:11.123 [2024-04-26 20:38:29.246466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.123 [2024-04-26 20:38:29.246468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.695 20:38:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:11.695 20:38:29 -- common/autotest_common.sh@852 -- # return 0 00:21:11.695 20:38:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:11.695 20:38:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:11.695 20:38:29 -- common/autotest_common.sh@10 -- # set +x 00:21:11.695 20:38:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.695 20:38:29 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:11.695 20:38:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:11.695 20:38:29 -- common/autotest_common.sh@10 -- # set +x 00:21:11.695 [2024-04-26 20:38:29.805743] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.695 20:38:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:11.695 20:38:29 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:21:11.695 20:38:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:11.695 20:38:29 -- common/autotest_common.sh@10 -- # set +x 00:21:11.695 20:38:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:11.695 20:38:29 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:11.695 20:38:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:11.695 20:38:29 -- common/autotest_common.sh@10 -- # set +x 00:21:11.695 [2024-04-26 20:38:29.821918] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.695 20:38:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:11.695 20:38:29 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:21:11.695 20:38:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:11.695 20:38:29 -- common/autotest_common.sh@10 -- # set +x 00:21:11.695 NULL1 00:21:11.695 20:38:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:11.695 20:38:29 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:21:11.695 20:38:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:11.695 20:38:29 -- common/autotest_common.sh@10 -- # set +x 00:21:11.695 Delay0 00:21:11.695 20:38:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:11.695 20:38:29 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:11.695 20:38:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:11.695 20:38:29 -- common/autotest_common.sh@10 -- # set +x 00:21:11.695 20:38:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:11.695 20:38:29 -- target/delete_subsystem.sh@28 -- # perf_pid=3549354 00:21:11.695 20:38:29 -- target/delete_subsystem.sh@30 -- # sleep 2 00:21:11.695 20:38:29 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:21:11.695 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.695 [2024-04-26 20:38:29.946814] 
subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:13.668 20:38:31 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:13.668 20:38:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.668 20:38:31 -- common/autotest_common.sh@10 -- # set +x
00:21:13.927-00:21:15.139 [several hundred interleaved "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" completions condensed; the distinct errors raised while the qpairs were torn down were:]
[2024-04-26 20:38:32.214314] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000002a40 is same with the state(5) to be set
[2024-04-26 20:38:32.215157] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61300000ffc0 is same with the state(5) to be set
[2024-04-26 20:38:33.170701] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000002180 is same with the state(5) to be set
[2024-04-26 20:38:33.214358] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6130000106c0 is same with the state(5) to be set
[2024-04-26 20:38:33.216245] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000002340 is same with the state(5) to be set
[2024-04-26 20:38:33.216633] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6130000026c0 is same with the state(5) to be set
[2024-04-26 20:38:33.216867] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000002dc0 is same with the state(5) to be set
00:21:15.139 20:38:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:15.139 20:38:33 -- target/delete_subsystem.sh@34 -- # delay=0 [2024-04-26 20:38:33.219054] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000002180 (9): Bad file descriptor /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:21:15.140 20:38:33 -- target/delete_subsystem.sh@35 -- # kill -0 3549354 00:21:15.140 20:38:33 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:21:15.140 Initializing NVMe Controllers 00:21:15.140 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:15.140 Controller IO queue size 128, less than required. 00:21:15.140 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:15.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:21:15.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:21:15.140 Initialization complete. Launching workers.
00:21:15.140 ======================================================== 00:21:15.140 Latency(us) 00:21:15.140 Device Information : IOPS MiB/s Average min max 00:21:15.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 195.67 0.10 944179.67 1514.07 1012936.34 00:21:15.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.93 0.08 867524.36 415.69 1013090.99 00:21:15.140 ======================================================== 00:21:15.140 Total : 353.60 0.17 909943.17 415.69 1013090.99 00:21:15.140 00:21:15.406 20:38:33 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:21:15.406 20:38:33 -- target/delete_subsystem.sh@35 -- # kill -0 3549354 00:21:15.406 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3549354) - No such process 00:21:15.406 20:38:33 -- target/delete_subsystem.sh@45 -- # NOT wait 3549354 00:21:15.406 20:38:33 -- common/autotest_common.sh@640 -- # local es=0 00:21:15.406 20:38:33 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 3549354 00:21:15.406 20:38:33 -- common/autotest_common.sh@628 -- # local arg=wait 00:21:15.406 20:38:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:15.406 20:38:33 -- common/autotest_common.sh@632 -- # type -t wait 00:21:15.406 20:38:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:15.406 20:38:33 -- common/autotest_common.sh@643 -- # wait 3549354 00:21:15.406 20:38:33 -- common/autotest_common.sh@643 -- # es=1 00:21:15.406 20:38:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:15.406 20:38:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:15.406 20:38:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:15.406 20:38:33 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:21:15.406 20:38:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:15.406 20:38:33 -- common/autotest_common.sh@10 -- # set +x 00:21:15.406 20:38:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:15.406 20:38:33 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:15.406 20:38:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:15.406 20:38:33 -- common/autotest_common.sh@10 -- # set +x 00:21:15.406 [2024-04-26 20:38:33.742429] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:15.406 20:38:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:15.406 20:38:33 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:15.406 20:38:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:15.406 20:38:33 -- common/autotest_common.sh@10 -- # set +x 00:21:15.665 20:38:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:15.665 20:38:33 -- target/delete_subsystem.sh@54 -- # perf_pid=3550016 00:21:15.665 20:38:33 -- target/delete_subsystem.sh@56 -- # delay=0 00:21:15.665 20:38:33 -- target/delete_subsystem.sh@57 -- # kill -0 3550016 00:21:15.665 20:38:33 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:21:15.665 20:38:33 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:21:15.665 EAL: No free 2048 kB hugepages reported on node 
1 00:21:15.665 [2024-04-26 20:38:33.842920] subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:15.925 20:38:34 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:21:15.925 20:38:34 -- target/delete_subsystem.sh@57 -- # kill -0 3550016 00:21:15.925 20:38:34 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:21:16.496 20:38:34 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:21:16.496 20:38:34 -- target/delete_subsystem.sh@57 -- # kill -0 3550016 00:21:16.496 20:38:34 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:21:17.067 20:38:35 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:21:17.067 20:38:35 -- target/delete_subsystem.sh@57 -- # kill -0 3550016 00:21:17.067 20:38:35 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:21:17.639 20:38:35 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:21:17.639 20:38:35 -- target/delete_subsystem.sh@57 -- # kill -0 3550016 00:21:17.639 20:38:35 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:21:18.211 20:38:36 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:21:18.211 20:38:36 -- target/delete_subsystem.sh@57 -- # kill -0 3550016 00:21:18.211 20:38:36 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:21:18.472 20:38:36 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:21:18.472 20:38:36 -- target/delete_subsystem.sh@57 -- # kill -0 3550016 00:21:18.472 20:38:36 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:21:19.041 Initializing NVMe Controllers 00:21:19.041 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:19.041 Controller IO queue size 128, less than required. 00:21:19.041 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:19.041 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:21:19.041 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:21:19.041 Initialization complete. Launching workers. 
00:21:19.041 ======================================================== 00:21:19.041 Latency(us) 00:21:19.041 Device Information : IOPS MiB/s Average min max 00:21:19.041 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002713.43 1000193.91 1009302.15 00:21:19.041 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004934.49 1000271.98 1041658.12 00:21:19.041 ======================================================== 00:21:19.041 Total : 256.00 0.12 1003823.96 1000193.91 1041658.12 00:21:19.041 00:21:19.041 20:38:37 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:21:19.041 20:38:37 -- target/delete_subsystem.sh@57 -- # kill -0 3550016 00:21:19.041 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3550016) - No such process 00:21:19.041 20:38:37 -- target/delete_subsystem.sh@67 -- # wait 3550016 00:21:19.041 20:38:37 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:19.041 20:38:37 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:21:19.041 20:38:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:19.041 20:38:37 -- nvmf/common.sh@116 -- # sync 00:21:19.041 20:38:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:19.041 20:38:37 -- nvmf/common.sh@119 -- # set +e 00:21:19.041 20:38:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:19.041 20:38:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:19.041 rmmod nvme_tcp 00:21:19.042 rmmod nvme_fabrics 00:21:19.042 rmmod nvme_keyring 00:21:19.042 20:38:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:19.042 20:38:37 -- nvmf/common.sh@123 -- # set -e 00:21:19.042 20:38:37 -- nvmf/common.sh@124 -- # return 0 00:21:19.042 20:38:37 -- nvmf/common.sh@477 -- # '[' -n 3549047 ']' 00:21:19.042 20:38:37 -- nvmf/common.sh@478 -- # killprocess 3549047 00:21:19.042 20:38:37 -- common/autotest_common.sh@926 -- # '[' -z 3549047 ']' 00:21:19.042 20:38:37 -- common/autotest_common.sh@930 -- # kill -0 3549047 00:21:19.042 20:38:37 -- common/autotest_common.sh@931 -- # uname 00:21:19.042 20:38:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:19.042 20:38:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3549047 00:21:19.301 20:38:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:19.301 20:38:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:19.301 20:38:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3549047' 00:21:19.301 killing process with pid 3549047 00:21:19.301 20:38:37 -- common/autotest_common.sh@945 -- # kill 3549047 00:21:19.301 20:38:37 -- common/autotest_common.sh@950 -- # wait 3549047 00:21:19.559 20:38:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:19.559 20:38:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:19.559 20:38:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:19.559 20:38:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:19.559 20:38:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:19.559 20:38:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.559 20:38:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:19.559 20:38:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.094 20:38:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:22.094 00:21:22.094 real 0m16.073s 00:21:22.094 user 0m31.040s 00:21:22.094 sys 0m4.541s 00:21:22.094 
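The trace above is the heart of delete_subsystem.sh's supervision logic: a background spdk_nvme_perf run is polled with kill -0 every 0.5 s under a bounded counter (the traced guards are (( delay++ > 30 )) and (( delay++ > 20 ))); once the subsystem is deleted on the target, every in-flight I/O completes aborted, perf exits with "errors occurred", and kill -0 starts failing with "No such process". A minimal, self-contained sketch of that polling idiom, assuming any backgrounded workload (the sleep job below stands in for spdk_nvme_perf; variable names mirror the trace):

( sleep 3 ) &                        # stand-in for the backgrounded spdk_nvme_perf
perf_pid=$!
delay=0
# Poll until the perf process exits; kill -0 sends no signal, it only
# checks that the PID still exists and can be signalled.
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && break      # cap the wait (~10 s at 0.5 s per poll)
    sleep 0.5
done
wait "$perf_pid" || true             # reap it; non-zero exit means I/Os errored out

Because kill -0 is purely a liveness probe, the same loop works whether the workload exits cleanly or is torn down underneath, which is exactly the property this test exercises.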
20:38:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:22.094 20:38:39 -- common/autotest_common.sh@10 -- # set +x 00:21:22.094 ************************************ 00:21:22.094 END TEST nvmf_delete_subsystem 00:21:22.094 ************************************ 00:21:22.094 20:38:39 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:21:22.094 20:38:39 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:21:22.094 20:38:39 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:21:22.094 20:38:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:22.094 20:38:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:22.094 20:38:39 -- common/autotest_common.sh@10 -- # set +x 00:21:22.094 ************************************ 00:21:22.094 START TEST nvmf_host_management 00:21:22.094 ************************************ 00:21:22.094 20:38:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:21:22.094 * Looking for test storage... 00:21:22.094 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:21:22.094 20:38:40 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:21:22.094 20:38:40 -- nvmf/common.sh@7 -- # uname -s 00:21:22.094 20:38:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:22.094 20:38:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.094 20:38:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:22.094 20:38:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:22.094 20:38:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:22.094 20:38:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.094 20:38:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.094 20:38:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.094 20:38:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.094 20:38:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.094 20:38:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:22.094 20:38:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:21:22.094 20:38:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.094 20:38:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.094 20:38:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:22.094 20:38:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:21:22.095 20:38:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.095 20:38:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.095 20:38:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.095 20:38:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:21:22.095 20:38:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.095 20:38:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.095 20:38:40 -- paths/export.sh@5 -- # export PATH 00:21:22.095 20:38:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.095 20:38:40 -- nvmf/common.sh@46 -- # : 0 00:21:22.095 20:38:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:22.095 20:38:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:22.095 20:38:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:22.095 20:38:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.095 20:38:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:22.095 20:38:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:22.095 20:38:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:22.095 20:38:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:22.095 20:38:40 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:22.095 20:38:40 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:22.095 20:38:40 -- target/host_management.sh@104 -- # nvmftestinit 00:21:22.095 20:38:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:22.095 20:38:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:22.095 20:38:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:22.095 20:38:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:22.095 20:38:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:22.095 20:38:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.095 20:38:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:22.095 20:38:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.095 20:38:40 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:21:22.095 20:38:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:22.095 20:38:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:22.095 20:38:40 -- common/autotest_common.sh@10 -- # set +x 
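Before any NVMe-oF test can run, nvmf/common.sh has to find usable NICs. The device scan that follows whitelists PCI functions by vendor:device ID (the e810/x722/mlx arrays filled at the top of the scan; 0x8086:0x159b is the Intel E810/ice part matched here), then reads the kernel netdev names out of sysfs. A rough, simplified sketch of that discovery step, assuming an E810 NIC; the real helper also covers Mellanox IDs, RDMA-only parts, and driver-binding corner cases:

intel=0x8086
declare -a net_devs=()
for dev in /sys/bus/pci/devices/*; do
    [[ $(cat "$dev/vendor") == "$intel" ]] || continue
    # keep only whitelisted parts; 0x159b is the E810/ice device matched below
    [[ $(cat "$dev/device") == "0x159b" ]] || continue
    [[ -d $dev/net ]] || continue            # skip functions not bound to a net driver
    pci_net_devs=("$dev/net/"*)              # kernel netdev names live under <pci>/net/
    net_devs+=("${pci_net_devs[@]##*/}")
done
echo "Found net devices: ${net_devs[*]}"

One of the two discovered ports (cvl_0_0) is then moved into the cvl_0_0_ns_spdk network namespace, as the ip netns calls further down show, so target and initiator can talk over real TCP on a single host.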
00:21:27.371 20:38:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:27.371 20:38:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:27.371 20:38:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:27.371 20:38:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:27.371 20:38:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:27.371 20:38:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:27.371 20:38:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:27.371 20:38:45 -- nvmf/common.sh@294 -- # net_devs=() 00:21:27.371 20:38:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:27.371 20:38:45 -- nvmf/common.sh@295 -- # e810=() 00:21:27.371 20:38:45 -- nvmf/common.sh@295 -- # local -ga e810 00:21:27.371 20:38:45 -- nvmf/common.sh@296 -- # x722=() 00:21:27.371 20:38:45 -- nvmf/common.sh@296 -- # local -ga x722 00:21:27.371 20:38:45 -- nvmf/common.sh@297 -- # mlx=() 00:21:27.371 20:38:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:27.371 20:38:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.371 20:38:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:27.371 20:38:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.371 20:38:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:27.371 20:38:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.371 20:38:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:27.371 20:38:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.371 20:38:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.371 20:38:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.371 20:38:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.371 20:38:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.371 20:38:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:27.371 20:38:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:27.371 20:38:45 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:21:27.371 20:38:45 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:21:27.371 20:38:45 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:21:27.371 20:38:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:27.371 20:38:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:27.371 20:38:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:21:27.371 Found 0000:27:00.0 (0x8086 - 0x159b) 00:21:27.371 20:38:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:27.371 20:38:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:27.371 20:38:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.371 20:38:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.371 20:38:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:27.371 20:38:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:27.371 20:38:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:21:27.371 Found 0000:27:00.1 (0x8086 - 0x159b) 00:21:27.371 20:38:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:27.371 20:38:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:27.371 20:38:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.371 20:38:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.371 20:38:45 -- nvmf/common.sh@351 -- # [[ tcp == 
rdma ]] 00:21:27.371 20:38:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:27.371 20:38:45 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:21:27.371 20:38:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:27.371 20:38:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.371 20:38:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:27.371 20:38:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.371 20:38:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:21:27.371 Found net devices under 0000:27:00.0: cvl_0_0 00:21:27.371 20:38:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.371 20:38:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:27.371 20:38:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.371 20:38:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:27.371 20:38:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.371 20:38:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:21:27.371 Found net devices under 0000:27:00.1: cvl_0_1 00:21:27.371 20:38:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.371 20:38:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:27.371 20:38:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:27.371 20:38:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:27.371 20:38:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:27.371 20:38:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:27.371 20:38:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:27.371 20:38:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:27.371 20:38:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:27.371 20:38:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:27.371 20:38:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:27.371 20:38:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:27.371 20:38:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:27.372 20:38:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:27.372 20:38:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:27.372 20:38:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:27.372 20:38:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:27.372 20:38:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:27.372 20:38:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:27.372 20:38:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:27.372 20:38:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:27.372 20:38:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:27.372 20:38:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:27.372 20:38:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:27.372 20:38:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:27.372 20:38:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:27.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:27.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.489 ms 00:21:27.372 00:21:27.372 --- 10.0.0.2 ping statistics --- 00:21:27.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.372 rtt min/avg/max/mdev = 0.489/0.489/0.489/0.000 ms 00:21:27.372 20:38:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:27.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:27.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:21:27.372 00:21:27.372 --- 10.0.0.1 ping statistics --- 00:21:27.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.372 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:21:27.372 20:38:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:27.372 20:38:45 -- nvmf/common.sh@410 -- # return 0 00:21:27.372 20:38:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:27.372 20:38:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:27.372 20:38:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:27.372 20:38:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:27.372 20:38:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:27.372 20:38:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:27.372 20:38:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:27.372 20:38:45 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:21:27.372 20:38:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:27.372 20:38:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:27.372 20:38:45 -- common/autotest_common.sh@10 -- # set +x 00:21:27.372 ************************************ 00:21:27.372 START TEST nvmf_host_management 00:21:27.372 ************************************ 00:21:27.372 20:38:45 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:21:27.372 20:38:45 -- target/host_management.sh@69 -- # starttarget 00:21:27.372 20:38:45 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:21:27.372 20:38:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:27.372 20:38:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:27.372 20:38:45 -- common/autotest_common.sh@10 -- # set +x 00:21:27.372 20:38:45 -- nvmf/common.sh@469 -- # nvmfpid=3554785 00:21:27.372 20:38:45 -- nvmf/common.sh@470 -- # waitforlisten 3554785 00:21:27.372 20:38:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:27.372 20:38:45 -- common/autotest_common.sh@819 -- # '[' -z 3554785 ']' 00:21:27.372 20:38:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.372 20:38:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:27.372 20:38:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.372 20:38:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:27.372 20:38:45 -- common/autotest_common.sh@10 -- # set +x 00:21:27.372 [2024-04-26 20:38:45.619804] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:21:27.372 [2024-04-26 20:38:45.619884] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:27.372 EAL: No free 2048 kB hugepages reported on node 1 00:21:27.633 [2024-04-26 20:38:45.714813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:27.633 [2024-04-26 20:38:45.813953] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:27.633 [2024-04-26 20:38:45.814137] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:27.633 [2024-04-26 20:38:45.814152] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:27.633 [2024-04-26 20:38:45.814161] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:27.633 [2024-04-26 20:38:45.814319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:27.633 [2024-04-26 20:38:45.814438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:27.633 [2024-04-26 20:38:45.814562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.633 [2024-04-26 20:38:45.814590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:28.202 20:38:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:28.203 20:38:46 -- common/autotest_common.sh@852 -- # return 0 00:21:28.203 20:38:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:28.203 20:38:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:28.203 20:38:46 -- common/autotest_common.sh@10 -- # set +x 00:21:28.203 20:38:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.203 20:38:46 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:28.203 20:38:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:28.203 20:38:46 -- common/autotest_common.sh@10 -- # set +x 00:21:28.203 [2024-04-26 20:38:46.394871] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.203 20:38:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:28.203 20:38:46 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:21:28.203 20:38:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:28.203 20:38:46 -- common/autotest_common.sh@10 -- # set +x 00:21:28.203 20:38:46 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:28.203 20:38:46 -- target/host_management.sh@23 -- # cat 00:21:28.203 20:38:46 -- target/host_management.sh@30 -- # rpc_cmd 00:21:28.203 20:38:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:28.203 20:38:46 -- common/autotest_common.sh@10 -- # set +x 00:21:28.203 Malloc0 00:21:28.203 [2024-04-26 20:38:46.474102] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:28.203 20:38:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:28.203 20:38:46 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:21:28.203 20:38:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:28.203 20:38:46 -- common/autotest_common.sh@10 -- # set +x 00:21:28.203 20:38:46 -- target/host_management.sh@73 -- # perfpid=3555101 00:21:28.203 20:38:46 -- target/host_management.sh@74 -- # 
waitforlisten 3555101 /var/tmp/bdevperf.sock 00:21:28.203 20:38:46 -- common/autotest_common.sh@819 -- # '[' -z 3555101 ']' 00:21:28.203 20:38:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:28.203 20:38:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:28.203 20:38:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:28.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:28.203 20:38:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:28.203 20:38:46 -- common/autotest_common.sh@10 -- # set +x 00:21:28.203 20:38:46 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:21:28.203 20:38:46 -- target/host_management.sh@72 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:28.203 20:38:46 -- nvmf/common.sh@520 -- # config=() 00:21:28.203 20:38:46 -- nvmf/common.sh@520 -- # local subsystem config 00:21:28.203 20:38:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:28.203 20:38:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:28.203 { 00:21:28.203 "params": { 00:21:28.203 "name": "Nvme$subsystem", 00:21:28.203 "trtype": "$TEST_TRANSPORT", 00:21:28.203 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.203 "adrfam": "ipv4", 00:21:28.203 "trsvcid": "$NVMF_PORT", 00:21:28.203 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.203 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.203 "hdgst": ${hdgst:-false}, 00:21:28.203 "ddgst": ${ddgst:-false} 00:21:28.203 }, 00:21:28.203 "method": "bdev_nvme_attach_controller" 00:21:28.203 } 00:21:28.203 EOF 00:21:28.203 )") 00:21:28.203 20:38:46 -- nvmf/common.sh@542 -- # cat 00:21:28.203 20:38:46 -- nvmf/common.sh@544 -- # jq . 00:21:28.203 20:38:46 -- nvmf/common.sh@545 -- # IFS=, 00:21:28.203 20:38:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:28.203 "params": { 00:21:28.203 "name": "Nvme0", 00:21:28.203 "trtype": "tcp", 00:21:28.203 "traddr": "10.0.0.2", 00:21:28.203 "adrfam": "ipv4", 00:21:28.203 "trsvcid": "4420", 00:21:28.203 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:28.203 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:28.203 "hdgst": false, 00:21:28.203 "ddgst": false 00:21:28.203 }, 00:21:28.203 "method": "bdev_nvme_attach_controller" 00:21:28.203 }' 00:21:28.464 [2024-04-26 20:38:46.585675] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:28.464 [2024-04-26 20:38:46.585792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3555101 ] 00:21:28.464 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.464 [2024-04-26 20:38:46.698838] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.464 [2024-04-26 20:38:46.787555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.722 Running I/O for 10 seconds... 
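The single-quoted JSON printed just above is what gen_nvmf_target_json pipes to bdevperf as /dev/fd/63: one bdev_nvme_attach_controller entry telling bdevperf to connect to the subsystem at 10.0.0.2:4420 as host nqn.2016-06.io.spdk:host0 and expose it as bdev Nvme0n1. A standalone equivalent, sketched under the assumption that the helper wraps the entry in the usual "subsystems"/"bdev" envelope (the envelope itself is not shown in the trace), that the file path /tmp/bdevperf.json is arbitrary, and that an SPDK target is already listening:

cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same flags as the traced run: queue depth 64, 64 KiB I/Os,
# "verify" (write, read back, compare) workload, for 10 seconds.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 10

The attached controller surfaces as Nvme0n1 (controller name plus namespace index), which is the bdev name that waitforio polls via bdev_get_iostat below; hdgst/ddgst false leave the optional NVMe/TCP header and data digests disabled.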
00:21:28.982 20:38:47 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:21:28.982 20:38:47 -- common/autotest_common.sh@852 -- # return 0
00:21:28.982 20:38:47 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:21:28.982 20:38:47 -- common/autotest_common.sh@551 -- # xtrace_disable
00:21:28.982 20:38:47 -- common/autotest_common.sh@10 -- # set +x
00:21:28.982 20:38:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:21:28.982 20:38:47 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:28.982 20:38:47 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:21:28.982 20:38:47 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:21:28.982 20:38:47 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:21:28.982 20:38:47 -- target/host_management.sh@52 -- # local ret=1
00:21:28.982 20:38:47 -- target/host_management.sh@53 -- # local i
00:21:28.982 20:38:47 -- target/host_management.sh@54 -- # (( i = 10 ))
00:21:28.982 20:38:47 -- target/host_management.sh@54 -- # (( i != 0 ))
00:21:28.982 20:38:47 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:21:28.982 20:38:47 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:21:28.982 20:38:47 -- common/autotest_common.sh@551 -- # xtrace_disable
00:21:28.982 20:38:47 -- common/autotest_common.sh@10 -- # set +x
00:21:28.982 20:38:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:21:29.245 20:38:47 -- target/host_management.sh@55 -- # read_io_count=782
00:21:29.245 20:38:47 -- target/host_management.sh@58 -- # '[' 782 -ge 100 ']'
00:21:29.245 20:38:47 -- target/host_management.sh@59 -- # ret=0
00:21:29.245 20:38:47 -- target/host_management.sh@60 -- # break
00:21:29.245 20:38:47 -- target/host_management.sh@64 -- # return 0
00:21:29.245 20:38:47 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:21:29.245 20:38:47 -- common/autotest_common.sh@551 -- # xtrace_disable
00:21:29.246 20:38:47 -- common/autotest_common.sh@10 -- # set +x
00:21:29.246 [2024-04-26 20:38:47.343611] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set
00:21:29.246 [the tcp.c:1574 recv-state error above repeats at 20:38:47.343672 through 20:38:47.343750 while the host's queue pairs are torn down]
20:38:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:21:29.246 [2024-04-26 20:38:47.348269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:29.246 [2024-04-26 20:38:47.348324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:29.246 [command/completion pairs like the two lines above repeat for every outstanding READ/WRITE on qid:1, logged 20:38:47.348353 through 20:38:47.349438, all completed ABORTED - SQ DELETION (00/08); the shell trace below was interleaved with this output]
20:38:47 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
20:38:47 -- common/autotest_common.sh@551 -- # xtrace_disable
20:38:47 -- common/autotest_common.sh@10 -- # set +x
00:21:29.247 [2024-04-26 20:38:47.349448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.247 [2024-04-26 20:38:47.349457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.247 [2024-04-26 20:38:47.349466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.247 [2024-04-26 20:38:47.349475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.247 [2024-04-26 20:38:47.349485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.247 [2024-04-26 20:38:47.349493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.247 [2024-04-26 20:38:47.349503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.247 [2024-04-26 20:38:47.349512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.247 [2024-04-26 20:38:47.349522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.247 [2024-04-26 20:38:47.349531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.247 [2024-04-26 20:38:47.349543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.247 [2024-04-26 20:38:47.349551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.247 [2024-04-26 20:38:47.349562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.247 [2024-04-26 20:38:47.349571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.247 [2024-04-26 20:38:47.349724] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x613000003d80 was disconnected and freed. reset controller. 
00:21:29.247 [2024-04-26 20:38:47.350630] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:21:29.247 task offset: 118272 on job bdev=Nvme0n1 fails
00:21:29.247
00:21:29.247 Latency(us)
00:21:29.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:29.247 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:29.247 Job: Nvme0n1 ended in about 0.30 seconds with error
00:21:29.247 Verification LBA range: start 0x0 length 0x400
00:21:29.247 Nvme0n1 : 0.30 3010.30 188.14 212.88 0.00 19546.37 1819.49 24972.67
00:21:29.247 ===================================================================================================================
00:21:29.247 Total : 3010.30 188.14 212.88 0.00 19546.37 1819.49 24972.67
00:21:29.247 [2024-04-26 20:38:47.353153] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:29.247 [2024-04-26 20:38:47.353191] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003140 (9): Bad file descriptor
00:21:29.247 [2024-04-26 20:38:47.355121] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:21:29.247 [2024-04-26 20:38:47.355323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:21:29.247 [2024-04-26 20:38:47.355351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:29.247 [2024-04-26 20:38:47.355372] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:21:29.247 [2024-04-26 20:38:47.355390] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:21:29.248 [2024-04-26 20:38:47.355402] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:29.248 [2024-04-26 20:38:47.355412] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x613000003140
00:21:29.248 [2024-04-26 20:38:47.355437] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003140 (9): Bad file descriptor
00:21:29.248 [2024-04-26 20:38:47.355452] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:21:29.248 [2024-04-26 20:38:47.355462] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:21:29.248 [2024-04-26 20:38:47.355474] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:21:29.248 [2024-04-26 20:38:47.355493] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:29.248 20:38:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:29.248 20:38:47 -- target/host_management.sh@87 -- # sleep 1 00:21:30.186 20:38:48 -- target/host_management.sh@91 -- # kill -9 3555101 00:21:30.186 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3555101) - No such process 00:21:30.186 20:38:48 -- target/host_management.sh@91 -- # true 00:21:30.186 20:38:48 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:21:30.186 20:38:48 -- target/host_management.sh@100 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:30.186 20:38:48 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:21:30.186 20:38:48 -- nvmf/common.sh@520 -- # config=() 00:21:30.186 20:38:48 -- nvmf/common.sh@520 -- # local subsystem config 00:21:30.186 20:38:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:30.186 20:38:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:30.186 { 00:21:30.186 "params": { 00:21:30.186 "name": "Nvme$subsystem", 00:21:30.186 "trtype": "$TEST_TRANSPORT", 00:21:30.186 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.186 "adrfam": "ipv4", 00:21:30.186 "trsvcid": "$NVMF_PORT", 00:21:30.186 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.186 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.186 "hdgst": ${hdgst:-false}, 00:21:30.186 "ddgst": ${ddgst:-false} 00:21:30.186 }, 00:21:30.186 "method": "bdev_nvme_attach_controller" 00:21:30.186 } 00:21:30.186 EOF 00:21:30.186 )") 00:21:30.186 20:38:48 -- nvmf/common.sh@542 -- # cat 00:21:30.186 20:38:48 -- nvmf/common.sh@544 -- # jq . 00:21:30.186 20:38:48 -- nvmf/common.sh@545 -- # IFS=, 00:21:30.186 20:38:48 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:30.186 "params": { 00:21:30.186 "name": "Nvme0", 00:21:30.186 "trtype": "tcp", 00:21:30.186 "traddr": "10.0.0.2", 00:21:30.186 "adrfam": "ipv4", 00:21:30.186 "trsvcid": "4420", 00:21:30.186 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:30.186 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:30.186 "hdgst": false, 00:21:30.186 "ddgst": false 00:21:30.186 }, 00:21:30.186 "method": "bdev_nvme_attach_controller" 00:21:30.186 }' 00:21:30.186 [2024-04-26 20:38:48.435409] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:30.186 [2024-04-26 20:38:48.435522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3555444 ] 00:21:30.186 EAL: No free 2048 kB hugepages reported on node 1 00:21:30.445 [2024-04-26 20:38:48.548676] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.445 [2024-04-26 20:38:48.637479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.712 Running I/O for 1 seconds... 
00:21:31.652
00:21:31.652 Latency(us)
00:21:31.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:31.652 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:31.652 Verification LBA range: start 0x0 length 0x400
00:21:31.652 Nvme0n1 : 1.05 3475.98 217.25 0.00 0.00 17492.60 1845.36 43874.63
00:21:31.652 ===================================================================================================================
00:21:31.652 Total : 3475.98 217.25 0.00 0.00 17492.60 1845.36 43874.63
00:21:32.222 20:38:50 -- target/host_management.sh@101 -- # stoptarget 00:21:32.222 20:38:50 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:21:32.222 20:38:50 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:32.222 20:38:50 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:32.222 20:38:50 -- target/host_management.sh@40 -- # nvmftestfini 00:21:32.222 20:38:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:32.222 20:38:50 -- nvmf/common.sh@116 -- # sync 00:21:32.222 20:38:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:32.222 20:38:50 -- nvmf/common.sh@119 -- # set +e 00:21:32.222 20:38:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:32.222 20:38:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:32.222 rmmod nvme_tcp 00:21:32.222 rmmod nvme_fabrics 00:21:32.222 rmmod nvme_keyring 00:21:32.222 20:38:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:32.222 20:38:50 -- nvmf/common.sh@123 -- # set -e 00:21:32.222 20:38:50 -- nvmf/common.sh@124 -- # return 0 00:21:32.222 20:38:50 -- nvmf/common.sh@477 -- # '[' -n 3554785 ']' 00:21:32.222 20:38:50 -- nvmf/common.sh@478 -- # killprocess 3554785 00:21:32.222 20:38:50 -- common/autotest_common.sh@926 -- # '[' -z 3554785 ']' 00:21:32.222 20:38:50 -- common/autotest_common.sh@930 -- # kill -0 3554785 00:21:32.222 20:38:50 -- common/autotest_common.sh@931 -- # uname 00:21:32.222 20:38:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:32.222 20:38:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3554785 00:21:32.222 20:38:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:32.222 20:38:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:32.222 20:38:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3554785' 00:21:32.222 killing process with pid 3554785 00:21:32.222 20:38:50 -- common/autotest_common.sh@945 -- # kill 3554785 00:21:32.222 20:38:50 -- common/autotest_common.sh@950 -- # wait 3554785 00:21:32.787 [2024-04-26 20:38:50.901623] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:21:32.787 20:38:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:32.787 20:38:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:32.787 20:38:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:32.787 20:38:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:32.787 20:38:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:32.787 20:38:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.787 20:38:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:32.787 20:38:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.696
00:21:34.696 real 0m7.457s 00:21:34.696 user 0m22.921s 00:21:34.696 sys 0m1.128s 00:21:34.696 20:38:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:34.696 20:38:53 -- common/autotest_common.sh@10 -- # set +x 00:21:34.696 ************************************ 00:21:34.696 END TEST nvmf_host_management 00:21:34.696 ************************************ 00:21:34.957 20:38:53 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:21:34.957 00:21:34.957 real 0m13.099s 00:21:34.957 user 0m24.459s 00:21:34.957 sys 0m5.218s 00:21:34.957 20:38:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:34.957 20:38:53 -- common/autotest_common.sh@10 -- # set +x 00:21:34.957 ************************************ 00:21:34.957 END TEST nvmf_host_management 00:21:34.957 ************************************ 00:21:34.957 20:38:53 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:21:34.957 20:38:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:34.957 20:38:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:34.957 20:38:53 -- common/autotest_common.sh@10 -- # set +x 00:21:34.957 ************************************ 00:21:34.957 START TEST nvmf_lvol 00:21:34.957 ************************************ 00:21:34.957 20:38:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:21:34.957 * Looking for test storage... 00:21:34.957 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:21:34.957 20:38:53 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:21:34.957 20:38:53 -- nvmf/common.sh@7 -- # uname -s 00:21:34.957 20:38:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:34.957 20:38:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:34.957 20:38:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:34.957 20:38:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:34.957 20:38:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:34.957 20:38:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:34.957 20:38:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:34.957 20:38:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:34.957 20:38:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:34.957 20:38:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:34.958 20:38:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:34.958 20:38:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:21:34.958 20:38:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:34.958 20:38:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:34.958 20:38:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:34.958 20:38:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:21:34.958 20:38:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:34.958 20:38:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:34.958 20:38:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:34.958 20:38:53 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... identical toolchain prefixes repeated ahead of the stock system PATH; the equally long PATH assignments from paths/export.sh@3 and @4, the export PATH at @5 and the echo of the final value at @6 are omitted here ...] 00:21:34.958 20:38:53 -- nvmf/common.sh@46 -- # : 0 00:21:34.958 20:38:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:34.958 20:38:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:34.958 20:38:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:34.958 20:38:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:34.958 20:38:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:34.958 20:38:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:34.958 20:38:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:34.958 20:38:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:34.958 20:38:53 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:34.958 20:38:53 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:34.958 20:38:53 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:21:34.958 20:38:53 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:21:34.958 20:38:53 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:21:34.958 20:38:53 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:21:34.958 20:38:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:34.958 20:38:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT
SIGTERM EXIT 00:21:34.958 20:38:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:34.958 20:38:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:34.958 20:38:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:34.958 20:38:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.958 20:38:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:34.958 20:38:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.958 20:38:53 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:21:34.958 20:38:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:34.958 20:38:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:34.958 20:38:53 -- common/autotest_common.sh@10 -- # set +x 00:21:40.233 20:38:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:40.233 20:38:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:40.233 20:38:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:40.233 20:38:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:40.234 20:38:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:40.234 20:38:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:40.234 20:38:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:40.234 20:38:58 -- nvmf/common.sh@294 -- # net_devs=() 00:21:40.234 20:38:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:40.234 20:38:58 -- nvmf/common.sh@295 -- # e810=() 00:21:40.234 20:38:58 -- nvmf/common.sh@295 -- # local -ga e810 00:21:40.234 20:38:58 -- nvmf/common.sh@296 -- # x722=() 00:21:40.234 20:38:58 -- nvmf/common.sh@296 -- # local -ga x722 00:21:40.234 20:38:58 -- nvmf/common.sh@297 -- # mlx=() 00:21:40.234 20:38:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:40.234 20:38:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:40.234 20:38:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:40.234 20:38:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:40.234 20:38:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:40.234 20:38:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:40.234 20:38:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:40.234 20:38:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:40.234 20:38:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:40.234 20:38:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:40.234 20:38:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:40.234 20:38:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:40.234 20:38:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:40.234 20:38:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:40.234 20:38:58 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:21:40.234 20:38:58 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:21:40.234 20:38:58 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:21:40.234 20:38:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:40.234 20:38:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:40.234 20:38:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:21:40.234 Found 0000:27:00.0 (0x8086 - 0x159b) 00:21:40.234 20:38:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:40.234 20:38:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:40.234 20:38:58 -- nvmf/common.sh@349 
-- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.234 20:38:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.234 20:38:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:40.234 20:38:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:40.234 20:38:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:21:40.234 Found 0000:27:00.1 (0x8086 - 0x159b) 00:21:40.234 20:38:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:40.234 20:38:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:40.234 20:38:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.234 20:38:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.234 20:38:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:40.234 20:38:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:40.234 20:38:58 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:21:40.234 20:38:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:40.234 20:38:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.234 20:38:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:40.234 20:38:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.234 20:38:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:21:40.234 Found net devices under 0000:27:00.0: cvl_0_0 00:21:40.234 20:38:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.234 20:38:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:40.234 20:38:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.234 20:38:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:40.234 20:38:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.234 20:38:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:21:40.234 Found net devices under 0000:27:00.1: cvl_0_1 00:21:40.234 20:38:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.234 20:38:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:40.234 20:38:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:40.234 20:38:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:40.234 20:38:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:40.234 20:38:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:40.234 20:38:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:40.234 20:38:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:40.234 20:38:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:40.234 20:38:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:40.234 20:38:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:40.234 20:38:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:40.234 20:38:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:40.234 20:38:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:40.234 20:38:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:40.234 20:38:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:40.234 20:38:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:40.234 20:38:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:40.234 20:38:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:40.234 20:38:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:40.234 20:38:58 -- nvmf/common.sh@254 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:40.234 20:38:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:40.234 20:38:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:40.234 20:38:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:40.234 20:38:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:40.234 20:38:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:40.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:40.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:21:40.234 00:21:40.234 --- 10.0.0.2 ping statistics --- 00:21:40.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.234 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:21:40.234 20:38:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:40.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:40.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:21:40.234 00:21:40.234 --- 10.0.0.1 ping statistics --- 00:21:40.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.234 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:21:40.234 20:38:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:40.234 20:38:58 -- nvmf/common.sh@410 -- # return 0 00:21:40.234 20:38:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:40.234 20:38:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:40.234 20:38:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:40.234 20:38:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:40.234 20:38:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:40.234 20:38:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:40.234 20:38:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:40.234 20:38:58 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:21:40.234 20:38:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:40.234 20:38:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:40.234 20:38:58 -- common/autotest_common.sh@10 -- # set +x 00:21:40.234 20:38:58 -- nvmf/common.sh@469 -- # nvmfpid=3559672 00:21:40.234 20:38:58 -- nvmf/common.sh@470 -- # waitforlisten 3559672 00:21:40.234 20:38:58 -- common/autotest_common.sh@819 -- # '[' -z 3559672 ']' 00:21:40.234 20:38:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.234 20:38:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:40.234 20:38:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.234 20:38:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:40.234 20:38:58 -- common/autotest_common.sh@10 -- # set +x 00:21:40.234 20:38:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:21:40.234 [2024-04-26 20:38:58.377392] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:21:40.234 [2024-04-26 20:38:58.377495] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.234 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.234 [2024-04-26 20:38:58.496116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:40.494 [2024-04-26 20:38:58.590888] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:40.494 [2024-04-26 20:38:58.591067] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.494 [2024-04-26 20:38:58.591081] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.494 [2024-04-26 20:38:58.591090] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:40.494 [2024-04-26 20:38:58.591166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.494 [2024-04-26 20:38:58.591281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.494 [2024-04-26 20:38:58.591285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.754 20:38:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:40.754 20:38:59 -- common/autotest_common.sh@852 -- # return 0 00:21:40.754 20:38:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:40.754 20:38:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:40.754 20:38:59 -- common/autotest_common.sh@10 -- # set +x 00:21:41.012 20:38:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.012 20:38:59 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:41.012 [2024-04-26 20:38:59.250424] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.012 20:38:59 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:41.270 20:38:59 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:21:41.270 20:38:59 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:41.270 20:38:59 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:21:41.270 20:38:59 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:21:41.529 20:38:59 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:21:41.788 20:38:59 -- target/nvmf_lvol.sh@29 -- # lvs=5ca06fef-2a60-4d23-a719-42142af5eb20 00:21:41.788 20:38:59 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5ca06fef-2a60-4d23-a719-42142af5eb20 lvol 20 00:21:41.788 20:39:00 -- target/nvmf_lvol.sh@32 -- # lvol=f8866979-ba9f-4c0a-89f6-1dd88a8c1699 00:21:41.788 20:39:00 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:21:42.047 20:39:00 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f8866979-ba9f-4c0a-89f6-1dd88a8c1699 00:21:42.047 20:39:00 -- 
target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:42.306 [2024-04-26 20:39:00.445991] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.306 20:39:00 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:42.306 20:39:00 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:21:42.306 20:39:00 -- target/nvmf_lvol.sh@42 -- # perf_pid=3560293 00:21:42.306 20:39:00 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:21:42.566 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.500 20:39:01 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f8866979-ba9f-4c0a-89f6-1dd88a8c1699 MY_SNAPSHOT 00:21:43.500 20:39:01 -- target/nvmf_lvol.sh@47 -- # snapshot=08818147-27d1-418d-8974-fe8b9c2c9c89 00:21:43.500 20:39:01 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f8866979-ba9f-4c0a-89f6-1dd88a8c1699 30 00:21:43.759 20:39:01 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 08818147-27d1-418d-8974-fe8b9c2c9c89 MY_CLONE 00:21:44.078 20:39:02 -- target/nvmf_lvol.sh@49 -- # clone=0dcc4bfd-5091-4d99-9573-81e5b3fd839a 00:21:44.078 20:39:02 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0dcc4bfd-5091-4d99-9573-81e5b3fd839a 00:21:44.345 20:39:02 -- target/nvmf_lvol.sh@53 -- # wait 3560293 00:21:54.325 Initializing NVMe Controllers 00:21:54.325 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:21:54.325 Controller IO queue size 128, less than required. 00:21:54.325 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:54.325 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:21:54.325 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:21:54.325 Initialization complete. Launching workers. 
00:21:54.325 ========================================================
00:21:54.325 Latency(us)
00:21:54.325 Device Information : IOPS MiB/s Average min max
00:21:54.325 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 14214.40 55.52 9009.96 315.80 81983.92
00:21:54.325 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 13831.90 54.03 9255.41 1565.59 68870.85
00:21:54.325 ========================================================
00:21:54.325 Total : 28046.30 109.56 9131.01 315.80 81983.92
00:21:54.325
00:21:54.325 20:39:10 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:54.325 20:39:11 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f8866979-ba9f-4c0a-89f6-1dd88a8c1699 00:21:54.325 20:39:11 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5ca06fef-2a60-4d23-a719-42142af5eb20 00:21:54.325 20:39:11 -- target/nvmf_lvol.sh@60 -- # rm -f 00:21:54.325 20:39:11 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:21:54.325 20:39:11 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:21:54.325 20:39:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:54.325 20:39:11 -- nvmf/common.sh@116 -- # sync 00:21:54.325 20:39:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:54.325 20:39:11 -- nvmf/common.sh@119 -- # set +e 00:21:54.325 20:39:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:54.325 20:39:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:54.325 rmmod nvme_tcp 00:21:54.325 rmmod nvme_fabrics 00:21:54.325 rmmod nvme_keyring 00:21:54.325 20:39:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:54.325 20:39:11 -- nvmf/common.sh@123 -- # set -e 00:21:54.325 20:39:11 -- nvmf/common.sh@124 -- # return 0 00:21:54.325 20:39:11 -- nvmf/common.sh@477 -- # '[' -n 3559672 ']' 00:21:54.325 20:39:11 -- nvmf/common.sh@478 -- # killprocess 3559672 00:21:54.325 20:39:11 -- common/autotest_common.sh@926 -- # '[' -z 3559672 ']' 00:21:54.325 20:39:11 -- common/autotest_common.sh@930 -- # kill -0 3559672 00:21:54.325 20:39:11 -- common/autotest_common.sh@931 -- # uname 00:21:54.325 20:39:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:54.325 20:39:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3559672 00:21:54.325 20:39:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:54.325 20:39:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:54.325 20:39:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3559672' 00:21:54.325 killing process with pid 3559672 00:21:54.325 20:39:11 -- common/autotest_common.sh@945 -- # kill 3559672 00:21:54.325 20:39:11 -- common/autotest_common.sh@950 -- # wait 3559672 00:21:54.325 20:39:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:54.325 20:39:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:54.325 20:39:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:54.325 20:39:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:54.325 20:39:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:54.325 20:39:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.325 20:39:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:54.325 20:39:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.235
20:39:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:56.235 00:21:56.235 real 0m20.998s 00:21:56.235 user 1m2.409s 00:21:56.235 sys 0m5.799s 00:21:56.235 20:39:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:56.235 20:39:14 -- common/autotest_common.sh@10 -- # set +x 00:21:56.235 ************************************ 00:21:56.235 END TEST nvmf_lvol 00:21:56.235 ************************************ 00:21:56.235 20:39:14 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:21:56.235 20:39:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:56.235 20:39:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:56.235 20:39:14 -- common/autotest_common.sh@10 -- # set +x 00:21:56.235 ************************************ 00:21:56.235 START TEST nvmf_lvs_grow 00:21:56.235 ************************************ 00:21:56.235 20:39:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:21:56.235 * Looking for test storage... 00:21:56.235 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:21:56.235 20:39:14 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:21:56.235 20:39:14 -- nvmf/common.sh@7 -- # uname -s 00:21:56.235 20:39:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:56.235 20:39:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:56.235 20:39:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:56.235 20:39:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:56.235 20:39:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:56.235 20:39:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:56.235 20:39:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:56.235 20:39:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:56.235 20:39:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:56.235 20:39:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:56.235 20:39:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:56.235 20:39:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:21:56.235 20:39:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:56.235 20:39:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:56.235 20:39:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:56.235 20:39:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:21:56.235 20:39:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.235 20:39:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.235 20:39:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.235 20:39:14 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... identical toolchain prefixes repeated ahead of the stock system PATH; the equally long PATH assignments from paths/export.sh@3 and @4, the export PATH at @5 and the echo of the final value at @6 are omitted here ...] 00:21:56.236 20:39:14 -- nvmf/common.sh@46 -- # : 0 00:21:56.236 20:39:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:56.236 20:39:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:56.236 20:39:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:56.236 20:39:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:56.236 20:39:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:56.236 20:39:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:56.236 20:39:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:56.236 20:39:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:56.236 20:39:14 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:21:56.236 20:39:14 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:56.236 20:39:14 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:21:56.236 20:39:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:56.236 20:39:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:56.236 20:39:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:56.236 20:39:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:56.236 20:39:14 -- nvmf/common.sh@400 -- #
remove_spdk_ns 00:21:56.236 20:39:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.236 20:39:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:56.236 20:39:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.236 20:39:14 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:21:56.236 20:39:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:56.236 20:39:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:56.236 20:39:14 -- common/autotest_common.sh@10 -- # set +x 00:22:02.815 20:39:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:02.815 20:39:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:02.815 20:39:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:02.815 20:39:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:02.815 20:39:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:02.815 20:39:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:02.815 20:39:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:02.815 20:39:20 -- nvmf/common.sh@294 -- # net_devs=() 00:22:02.815 20:39:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:02.815 20:39:20 -- nvmf/common.sh@295 -- # e810=() 00:22:02.815 20:39:20 -- nvmf/common.sh@295 -- # local -ga e810 00:22:02.815 20:39:20 -- nvmf/common.sh@296 -- # x722=() 00:22:02.815 20:39:20 -- nvmf/common.sh@296 -- # local -ga x722 00:22:02.815 20:39:20 -- nvmf/common.sh@297 -- # mlx=() 00:22:02.815 20:39:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:02.815 20:39:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.815 20:39:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.815 20:39:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.815 20:39:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.815 20:39:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.815 20:39:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.815 20:39:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.815 20:39:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.815 20:39:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.815 20:39:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.815 20:39:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.815 20:39:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:02.815 20:39:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:02.815 20:39:20 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:22:02.815 20:39:20 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:22:02.815 20:39:20 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:22:02.815 20:39:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:02.815 20:39:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:02.815 20:39:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:22:02.815 Found 0000:27:00.0 (0x8086 - 0x159b) 00:22:02.815 20:39:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:02.815 20:39:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:02.815 20:39:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.815 20:39:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.815 20:39:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:02.815 
20:39:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:02.815 20:39:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:22:02.815 Found 0000:27:00.1 (0x8086 - 0x159b) 00:22:02.815 20:39:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:02.815 20:39:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:02.815 20:39:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.815 20:39:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.815 20:39:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:02.815 20:39:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:02.815 20:39:20 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:22:02.815 20:39:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:02.815 20:39:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.815 20:39:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:02.815 20:39:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.815 20:39:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:22:02.815 Found net devices under 0000:27:00.0: cvl_0_0 00:22:02.815 20:39:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.815 20:39:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:02.815 20:39:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.815 20:39:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:02.815 20:39:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.815 20:39:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:22:02.815 Found net devices under 0000:27:00.1: cvl_0_1 00:22:02.815 20:39:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.815 20:39:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:02.815 20:39:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:02.815 20:39:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:02.815 20:39:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:02.815 20:39:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:02.815 20:39:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.815 20:39:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.815 20:39:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:02.815 20:39:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:02.815 20:39:20 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:02.815 20:39:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:02.815 20:39:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:02.815 20:39:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:02.815 20:39:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.815 20:39:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:02.815 20:39:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:02.815 20:39:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:02.815 20:39:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:02.815 20:39:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:02.815 20:39:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:02.815 20:39:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:02.815 20:39:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set cvl_0_0 up 00:22:02.815 20:39:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:02.815 20:39:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:02.815 20:39:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:02.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:02.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:22:02.815 00:22:02.815 --- 10.0.0.2 ping statistics --- 00:22:02.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.815 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:22:02.815 20:39:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:02.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:02.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:22:02.815 00:22:02.815 --- 10.0.0.1 ping statistics --- 00:22:02.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.815 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:22:02.815 20:39:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.815 20:39:20 -- nvmf/common.sh@410 -- # return 0 00:22:02.815 20:39:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:02.815 20:39:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.815 20:39:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:02.815 20:39:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:02.815 20:39:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.815 20:39:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:02.815 20:39:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:02.815 20:39:21 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:22:02.815 20:39:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:02.815 20:39:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:02.815 20:39:21 -- common/autotest_common.sh@10 -- # set +x 00:22:02.815 20:39:21 -- nvmf/common.sh@469 -- # nvmfpid=3567160 00:22:02.815 20:39:21 -- nvmf/common.sh@470 -- # waitforlisten 3567160 00:22:02.815 20:39:21 -- common/autotest_common.sh@819 -- # '[' -z 3567160 ']' 00:22:02.815 20:39:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.815 20:39:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:02.815 20:39:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.815 20:39:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:02.815 20:39:21 -- common/autotest_common.sh@10 -- # set +x 00:22:02.815 20:39:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:02.815 [2024-04-26 20:39:21.102241] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
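For orientation, the nvmf_tcp_init sequence traced above reduces to the following namespace plumbing. This is a condensed sketch of what the common.sh helper did in this run, not a replacement for it; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing are specific to this test bed:

  # target port moves into a private namespace; initiator stays in the host
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP port
  ping -c 1 10.0.0.2                                                 # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # namespace -> host

With both pings answering at sub-millisecond RTT, nvmf_tgt is then launched under ip netns exec, so all NVMe/TCP traffic between initiator and target crosses the two physical ports rather than loopback.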
00:22:02.815 [2024-04-26 20:39:21.102369] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.076 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.076 [2024-04-26 20:39:21.239821] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.076 [2024-04-26 20:39:21.332125] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:03.076 [2024-04-26 20:39:21.332323] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.076 [2024-04-26 20:39:21.332337] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.076 [2024-04-26 20:39:21.332347] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:03.076 [2024-04-26 20:39:21.332391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.649 20:39:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:03.649 20:39:21 -- common/autotest_common.sh@852 -- # return 0 00:22:03.649 20:39:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:03.649 20:39:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:03.649 20:39:21 -- common/autotest_common.sh@10 -- # set +x 00:22:03.649 20:39:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.649 20:39:21 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:03.649 [2024-04-26 20:39:21.983237] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.911 20:39:22 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:22:03.911 20:39:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:03.911 20:39:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:03.911 20:39:22 -- common/autotest_common.sh@10 -- # set +x 00:22:03.911 ************************************ 00:22:03.911 START TEST lvs_grow_clean 00:22:03.911 ************************************ 00:22:03.911 20:39:22 -- common/autotest_common.sh@1104 -- # lvs_grow 00:22:03.911 20:39:22 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:22:03.911 20:39:22 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:22:03.911 20:39:22 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:22:03.911 20:39:22 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:22:03.911 20:39:22 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:22:03.911 20:39:22 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:22:03.911 20:39:22 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:22:03.911 20:39:22 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:22:03.911 20:39:22 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:22:03.911 20:39:22 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:22:03.911 20:39:22 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:22:04.173 20:39:22 -- target/nvmf_lvs_grow.sh@28 -- # lvs=24f1ff4a-b5f7-41a4-8eed-9a536d9e4256 00:22:04.173 20:39:22 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24f1ff4a-b5f7-41a4-8eed-9a536d9e4256 00:22:04.173 20:39:22 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:22:04.173 20:39:22 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:22:04.173 20:39:22 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:22:04.173 20:39:22 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 24f1ff4a-b5f7-41a4-8eed-9a536d9e4256 lvol 150 00:22:04.432 20:39:22 -- target/nvmf_lvs_grow.sh@33 -- # lvol=205a4db8-6a26-4c82-90b9-da18536e482e 00:22:04.432 20:39:22 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:22:04.432 20:39:22 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:22:04.432 [2024-04-26 20:39:22.717184] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:22:04.432 [2024-04-26 20:39:22.717272] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:22:04.432 true 00:22:04.432 20:39:22 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24f1ff4a-b5f7-41a4-8eed-9a536d9e4256 00:22:04.432 20:39:22 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:22:04.690 20:39:22 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:22:04.690 20:39:22 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:22:04.690 20:39:22 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 205a4db8-6a26-4c82-90b9-da18536e482e 00:22:04.947 20:39:23 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:04.947 [2024-04-26 20:39:23.241628] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.947 20:39:23 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:05.205 20:39:23 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3567652 00:22:05.205 20:39:23 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:05.205 20:39:23 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3567652 /var/tmp/bdevperf.sock 00:22:05.205 20:39:23 -- common/autotest_common.sh@819 -- # '[' -z 3567652 ']' 00:22:05.206 20:39:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:05.206 20:39:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:05.206 20:39:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:05.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:05.206 20:39:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:05.206 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:22:05.206 20:39:23 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:22:05.206 [2024-04-26 20:39:23.450430] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:05.206 [2024-04-26 20:39:23.450549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3567652 ] 00:22:05.206 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.465 [2024-04-26 20:39:23.563026] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.465 [2024-04-26 20:39:23.652580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.037 20:39:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:06.037 20:39:24 -- common/autotest_common.sh@852 -- # return 0 00:22:06.037 20:39:24 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:22:06.037 Nvme0n1 00:22:06.296 20:39:24 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:22:06.296 [ 00:22:06.296 { 00:22:06.296 "name": "Nvme0n1", 00:22:06.296 "aliases": [ 00:22:06.296 "205a4db8-6a26-4c82-90b9-da18536e482e" 00:22:06.296 ], 00:22:06.296 "product_name": "NVMe disk", 00:22:06.296 "block_size": 4096, 00:22:06.296 "num_blocks": 38912, 00:22:06.296 "uuid": "205a4db8-6a26-4c82-90b9-da18536e482e", 00:22:06.296 "assigned_rate_limits": { 00:22:06.296 "rw_ios_per_sec": 0, 00:22:06.296 "rw_mbytes_per_sec": 0, 00:22:06.296 "r_mbytes_per_sec": 0, 00:22:06.296 "w_mbytes_per_sec": 0 00:22:06.296 }, 00:22:06.296 "claimed": false, 00:22:06.296 "zoned": false, 00:22:06.296 "supported_io_types": { 00:22:06.296 "read": true, 00:22:06.296 "write": true, 00:22:06.296 "unmap": true, 00:22:06.296 "write_zeroes": true, 00:22:06.296 "flush": true, 00:22:06.296 "reset": true, 00:22:06.296 "compare": true, 00:22:06.296 "compare_and_write": true, 00:22:06.296 "abort": true, 00:22:06.296 "nvme_admin": true, 00:22:06.296 "nvme_io": true 00:22:06.296 }, 00:22:06.296 "driver_specific": { 00:22:06.296 "nvme": [ 00:22:06.296 { 00:22:06.296 "trid": { 00:22:06.296 "trtype": "TCP", 00:22:06.296 "adrfam": "IPv4", 00:22:06.296 "traddr": "10.0.0.2", 00:22:06.296 "trsvcid": "4420", 00:22:06.296 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:06.296 }, 00:22:06.296 "ctrlr_data": { 00:22:06.296 "cntlid": 1, 00:22:06.296 "vendor_id": "0x8086", 00:22:06.296 "model_number": "SPDK bdev Controller", 00:22:06.296 "serial_number": "SPDK0", 00:22:06.296 "firmware_revision": "24.01.1", 00:22:06.296 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:06.296 "oacs": { 00:22:06.296 "security": 0, 00:22:06.296 "format": 0, 00:22:06.296 "firmware": 0, 00:22:06.296 "ns_manage": 0 00:22:06.296 }, 00:22:06.296 "multi_ctrlr": true, 00:22:06.296 "ana_reporting": false 00:22:06.296 }, 00:22:06.296 "vs": { 00:22:06.296 "nvme_version": "1.3" 00:22:06.296 }, 
00:22:06.296 "ns_data": { 00:22:06.296 "id": 1, 00:22:06.296 "can_share": true 00:22:06.296 } 00:22:06.296 } 00:22:06.296 ], 00:22:06.296 "mp_policy": "active_passive" 00:22:06.296 } 00:22:06.296 } 00:22:06.296 ] 00:22:06.296 20:39:24 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3567811 00:22:06.296 20:39:24 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:06.296 20:39:24 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:22:06.296 Running I/O for 10 seconds... 00:22:07.675 Latency(us) 00:22:07.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:07.675 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:07.675 Nvme0n1 : 1.00 23599.00 92.18 0.00 0.00 0.00 0.00 0.00 00:22:07.675 =================================================================================================================== 00:22:07.675 Total : 23599.00 92.18 0.00 0.00 0.00 0.00 0.00 00:22:07.675 00:22:08.245 20:39:26 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 24f1ff4a-b5f7-41a4-8eed-9a536d9e4256 00:22:08.504 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:08.504 Nvme0n1 : 2.00 23823.00 93.06 0.00 0.00 0.00 0.00 0.00 00:22:08.504 =================================================================================================================== 00:22:08.504 Total : 23823.00 93.06 0.00 0.00 0.00 0.00 0.00 00:22:08.504 00:22:08.504 true 00:22:08.504 20:39:26 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:22:08.504 20:39:26 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24f1ff4a-b5f7-41a4-8eed-9a536d9e4256 00:22:08.504 20:39:26 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:22:08.504 20:39:26 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:22:08.504 20:39:26 -- target/nvmf_lvs_grow.sh@65 -- # wait 3567811 00:22:09.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:09.439 Nvme0n1 : 3.00 23911.33 93.40 0.00 0.00 0.00 0.00 0.00 00:22:09.439 =================================================================================================================== 00:22:09.439 Total : 23911.33 93.40 0.00 0.00 0.00 0.00 0.00 00:22:09.439 00:22:10.375 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:10.375 Nvme0n1 : 4.00 23959.50 93.59 0.00 0.00 0.00 0.00 0.00 00:22:10.375 =================================================================================================================== 00:22:10.375 Total : 23959.50 93.59 0.00 0.00 0.00 0.00 0.00 00:22:10.375 00:22:11.358 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:11.358 Nvme0n1 : 5.00 24022.60 93.84 0.00 0.00 0.00 0.00 0.00 00:22:11.358 =================================================================================================================== 00:22:11.358 Total : 24022.60 93.84 0.00 0.00 0.00 0.00 0.00 00:22:11.358 00:22:12.293 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:12.293 Nvme0n1 : 6.00 23995.17 93.73 0.00 0.00 0.00 0.00 0.00 00:22:12.293 =================================================================================================================== 00:22:12.293 Total : 23995.17 93.73 0.00 0.00 0.00 0.00 0.00 00:22:12.293 
00:22:13.670 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:13.671 Nvme0n1 : 7.00 24016.14 93.81 0.00 0.00 0.00 0.00 0.00 00:22:13.671 =================================================================================================================== 00:22:13.671 Total : 24016.14 93.81 0.00 0.00 0.00 0.00 0.00 00:22:13.671 00:22:14.605 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:14.605 Nvme0n1 : 8.00 24036.12 93.89 0.00 0.00 0.00 0.00 0.00 00:22:14.605 =================================================================================================================== 00:22:14.605 Total : 24036.12 93.89 0.00 0.00 0.00 0.00 0.00 00:22:14.605 00:22:15.560 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:15.560 Nvme0n1 : 9.00 24067.78 94.01 0.00 0.00 0.00 0.00 0.00 00:22:15.560 =================================================================================================================== 00:22:15.560 Total : 24067.78 94.01 0.00 0.00 0.00 0.00 0.00 00:22:15.560 00:22:16.495 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:16.495 Nvme0n1 : 10.00 24086.30 94.09 0.00 0.00 0.00 0.00 0.00 00:22:16.495 =================================================================================================================== 00:22:16.495 Total : 24086.30 94.09 0.00 0.00 0.00 0.00 0.00 00:22:16.495 00:22:16.495 00:22:16.495 Latency(us) 00:22:16.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.495 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:16.495 Nvme0n1 : 10.00 24082.08 94.07 0.00 0.00 5311.99 1974.70 12555.32 00:22:16.495 =================================================================================================================== 00:22:16.495 Total : 24082.08 94.07 0.00 0.00 5311.99 1974.70 12555.32 00:22:16.495 0 00:22:16.495 20:39:34 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3567652 00:22:16.495 20:39:34 -- common/autotest_common.sh@926 -- # '[' -z 3567652 ']' 00:22:16.495 20:39:34 -- common/autotest_common.sh@930 -- # kill -0 3567652 00:22:16.495 20:39:34 -- common/autotest_common.sh@931 -- # uname 00:22:16.495 20:39:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:16.495 20:39:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3567652 00:22:16.495 20:39:34 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:16.495 20:39:34 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:16.495 20:39:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3567652' 00:22:16.495 killing process with pid 3567652 00:22:16.495 20:39:34 -- common/autotest_common.sh@945 -- # kill 3567652 00:22:16.495 Received shutdown signal, test time was about 10.000000 seconds 00:22:16.495 00:22:16.495 Latency(us) 00:22:16.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.495 =================================================================================================================== 00:22:16.495 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:16.495 20:39:34 -- common/autotest_common.sh@950 -- # wait 3567652 00:22:16.755 20:39:35 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:17.016 20:39:35 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 24f1ff4a-b5f7-41a4-8eed-9a536d9e4256 00:22:17.016 20:39:35 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:22:17.016 20:39:35 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:22:17.016 20:39:35 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:22:17.016 20:39:35 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:22:17.277 [2024-04-26 20:39:35.399991] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:22:17.277 20:39:35 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24f1ff4a-b5f7-41a4-8eed-9a536d9e4256 00:22:17.277 20:39:35 -- common/autotest_common.sh@640 -- # local es=0 00:22:17.277 20:39:35 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24f1ff4a-b5f7-41a4-8eed-9a536d9e4256 00:22:17.277 20:39:35 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:22:17.277 20:39:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:17.277 20:39:35 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:22:17.277 20:39:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:17.277 20:39:35 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:22:17.277 20:39:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:17.277 20:39:35 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:22:17.277 20:39:35 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:22:17.277 20:39:35 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24f1ff4a-b5f7-41a4-8eed-9a536d9e4256 00:22:17.277 request: 00:22:17.277 { 00:22:17.277 "uuid": "24f1ff4a-b5f7-41a4-8eed-9a536d9e4256", 00:22:17.278 "method": "bdev_lvol_get_lvstores", 00:22:17.278 "req_id": 1 00:22:17.278 } 00:22:17.278 Got JSON-RPC error response 00:22:17.278 response: 00:22:17.278 { 00:22:17.278 "code": -19, 00:22:17.278 "message": "No such device" 00:22:17.278 } 00:22:17.278 20:39:35 -- common/autotest_common.sh@643 -- # es=1 00:22:17.278 20:39:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:17.278 20:39:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:17.278 20:39:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:17.278 20:39:35 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:22:17.561 aio_bdev 00:22:17.561 20:39:35 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 205a4db8-6a26-4c82-90b9-da18536e482e 00:22:17.561 20:39:35 -- common/autotest_common.sh@887 -- # local bdev_name=205a4db8-6a26-4c82-90b9-da18536e482e 00:22:17.561 20:39:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:17.561 20:39:35 -- common/autotest_common.sh@889 -- # local i 00:22:17.561 20:39:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:17.561 20:39:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:17.561 20:39:35 -- 
common/autotest_common.sh@892 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:22:17.561 20:39:35 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 205a4db8-6a26-4c82-90b9-da18536e482e -t 2000 00:22:17.821 [ 00:22:17.821 { 00:22:17.821 "name": "205a4db8-6a26-4c82-90b9-da18536e482e", 00:22:17.821 "aliases": [ 00:22:17.821 "lvs/lvol" 00:22:17.821 ], 00:22:17.821 "product_name": "Logical Volume", 00:22:17.821 "block_size": 4096, 00:22:17.821 "num_blocks": 38912, 00:22:17.821 "uuid": "205a4db8-6a26-4c82-90b9-da18536e482e", 00:22:17.821 "assigned_rate_limits": { 00:22:17.821 "rw_ios_per_sec": 0, 00:22:17.821 "rw_mbytes_per_sec": 0, 00:22:17.821 "r_mbytes_per_sec": 0, 00:22:17.821 "w_mbytes_per_sec": 0 00:22:17.821 }, 00:22:17.821 "claimed": false, 00:22:17.821 "zoned": false, 00:22:17.821 "supported_io_types": { 00:22:17.821 "read": true, 00:22:17.821 "write": true, 00:22:17.821 "unmap": true, 00:22:17.821 "write_zeroes": true, 00:22:17.821 "flush": false, 00:22:17.821 "reset": true, 00:22:17.821 "compare": false, 00:22:17.821 "compare_and_write": false, 00:22:17.821 "abort": false, 00:22:17.821 "nvme_admin": false, 00:22:17.821 "nvme_io": false 00:22:17.821 }, 00:22:17.821 "driver_specific": { 00:22:17.821 "lvol": { 00:22:17.821 "lvol_store_uuid": "24f1ff4a-b5f7-41a4-8eed-9a536d9e4256", 00:22:17.821 "base_bdev": "aio_bdev", 00:22:17.821 "thin_provision": false, 00:22:17.821 "snapshot": false, 00:22:17.821 "clone": false, 00:22:17.821 "esnap_clone": false 00:22:17.821 } 00:22:17.821 } 00:22:17.821 } 00:22:17.821 ] 00:22:17.821 20:39:35 -- common/autotest_common.sh@895 -- # return 0 00:22:17.821 20:39:35 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24f1ff4a-b5f7-41a4-8eed-9a536d9e4256 00:22:17.821 20:39:35 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:22:17.821 20:39:36 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:22:17.821 20:39:36 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24f1ff4a-b5f7-41a4-8eed-9a536d9e4256 00:22:17.821 20:39:36 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:22:18.080 20:39:36 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:22:18.080 20:39:36 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 205a4db8-6a26-4c82-90b9-da18536e482e 00:22:18.080 20:39:36 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 24f1ff4a-b5f7-41a4-8eed-9a536d9e4256 00:22:18.338 20:39:36 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:22:18.338 20:39:36 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:22:18.338 00:22:18.338 real 0m14.651s 00:22:18.338 user 0m14.330s 00:22:18.338 sys 0m1.144s 00:22:18.338 20:39:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:18.338 20:39:36 -- common/autotest_common.sh@10 -- # set +x 00:22:18.338 ************************************ 00:22:18.338 END TEST lvs_grow_clean 00:22:18.338 ************************************ 00:22:18.598 20:39:36 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:22:18.598 20:39:36 -- 
common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:18.598 20:39:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:18.598 20:39:36 -- common/autotest_common.sh@10 -- # set +x 00:22:18.598 ************************************ 00:22:18.598 START TEST lvs_grow_dirty 00:22:18.598 ************************************ 00:22:18.598 20:39:36 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:22:18.598 20:39:36 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:22:18.598 20:39:36 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:22:18.598 20:39:36 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:22:18.598 20:39:36 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:22:18.598 20:39:36 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:22:18.598 20:39:36 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:22:18.598 20:39:36 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:22:18.598 20:39:36 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:22:18.598 20:39:36 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:22:18.598 20:39:36 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:22:18.598 20:39:36 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:22:18.859 20:39:36 -- target/nvmf_lvs_grow.sh@28 -- # lvs=7bfa35c7-f49c-443f-83d6-0f06284e40db 00:22:18.859 20:39:36 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bfa35c7-f49c-443f-83d6-0f06284e40db 00:22:18.859 20:39:36 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:22:18.859 20:39:37 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:22:18.859 20:39:37 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:22:18.859 20:39:37 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7bfa35c7-f49c-443f-83d6-0f06284e40db lvol 150 00:22:19.120 20:39:37 -- target/nvmf_lvs_grow.sh@33 -- # lvol=e767412c-8767-4654-886b-4e02b96a5df4 00:22:19.120 20:39:37 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:22:19.120 20:39:37 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:22:19.120 [2024-04-26 20:39:37.363186] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:22:19.120 [2024-04-26 20:39:37.363271] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:22:19.120 true 00:22:19.120 20:39:37 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bfa35c7-f49c-443f-83d6-0f06284e40db 00:22:19.120 20:39:37 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:22:19.381 20:39:37 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:22:19.381 
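Condensed, the AIO/lvstore preparation that both the clean and dirty variants share looks like the sketch below. Here rpc.py abbreviates the full /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py path, AIO_FILE the aio_bdev backing file under test/nvmf/target, and $lvs stands in for the lvstore UUID the script captured (7bfa35c7-... in this run):

  truncate -s 200M "$AIO_FILE"                       # 200 MiB backing file
  rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096   # 51200 blocks of 4 KiB
  lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)   # prints the lvstore UUID
  rpc.py bdev_lvol_create -u "$lvs" lvol 150         # one 150 MiB lvol
  truncate -s 400M "$AIO_FILE"                       # grow the file under the bdev
  rpc.py bdev_aio_rescan aio_bdev                    # bdev now reports 102400 blocks

After the rescan the lvstore still reports total_data_clusters=49 (a 200 MiB store at 4 MiB per cluster, less metadata); it only reaches 99 once bdev_lvol_grow_lvstore is issued mid-workload, which is what the cluster-count checks above and below verify.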
20:39:37 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:22:19.381 20:39:37 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e767412c-8767-4654-886b-4e02b96a5df4 00:22:19.641 20:39:37 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:19.641 20:39:37 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:19.899 20:39:38 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3570577 00:22:19.899 20:39:38 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:19.899 20:39:38 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3570577 /var/tmp/bdevperf.sock 00:22:19.899 20:39:38 -- common/autotest_common.sh@819 -- # '[' -z 3570577 ']' 00:22:19.899 20:39:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:19.899 20:39:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:19.899 20:39:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:19.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:19.899 20:39:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:19.899 20:39:38 -- common/autotest_common.sh@10 -- # set +x 00:22:19.899 20:39:38 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:22:19.899 [2024-04-26 20:39:38.104302] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
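The I/O load comes from a second SPDK process, bdevperf, driven over its own RPC socket. Roughly, the flow traced here is as follows (paths abbreviated as before, $lvol standing in for the lvol UUID of this run, and -z being, as far as this trace shows, what holds bdevperf idle until the perform_tests RPC arrives):

  # target side: export the lvol over NVMe/TCP
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: 4 KiB random writes, QD 128, 10 s, per-second stats (-S 1)
  build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 \
      -w randwrite -t 10 -S 1 -z &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests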
00:22:19.899 [2024-04-26 20:39:38.104440] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3570577 ] 00:22:19.899 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.899 [2024-04-26 20:39:38.215802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.157 [2024-04-26 20:39:38.311037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.727 20:39:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:20.727 20:39:38 -- common/autotest_common.sh@852 -- # return 0 00:22:20.727 20:39:38 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:22:21.070 Nvme0n1 00:22:21.070 20:39:39 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:22:21.070 [ 00:22:21.071 { 00:22:21.071 "name": "Nvme0n1", 00:22:21.071 "aliases": [ 00:22:21.071 "e767412c-8767-4654-886b-4e02b96a5df4" 00:22:21.071 ], 00:22:21.071 "product_name": "NVMe disk", 00:22:21.071 "block_size": 4096, 00:22:21.071 "num_blocks": 38912, 00:22:21.071 "uuid": "e767412c-8767-4654-886b-4e02b96a5df4", 00:22:21.071 "assigned_rate_limits": { 00:22:21.071 "rw_ios_per_sec": 0, 00:22:21.071 "rw_mbytes_per_sec": 0, 00:22:21.071 "r_mbytes_per_sec": 0, 00:22:21.071 "w_mbytes_per_sec": 0 00:22:21.071 }, 00:22:21.071 "claimed": false, 00:22:21.071 "zoned": false, 00:22:21.071 "supported_io_types": { 00:22:21.071 "read": true, 00:22:21.071 "write": true, 00:22:21.071 "unmap": true, 00:22:21.071 "write_zeroes": true, 00:22:21.071 "flush": true, 00:22:21.071 "reset": true, 00:22:21.071 "compare": true, 00:22:21.071 "compare_and_write": true, 00:22:21.071 "abort": true, 00:22:21.071 "nvme_admin": true, 00:22:21.071 "nvme_io": true 00:22:21.071 }, 00:22:21.071 "driver_specific": { 00:22:21.071 "nvme": [ 00:22:21.071 { 00:22:21.071 "trid": { 00:22:21.071 "trtype": "TCP", 00:22:21.071 "adrfam": "IPv4", 00:22:21.071 "traddr": "10.0.0.2", 00:22:21.071 "trsvcid": "4420", 00:22:21.071 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:21.071 }, 00:22:21.071 "ctrlr_data": { 00:22:21.071 "cntlid": 1, 00:22:21.071 "vendor_id": "0x8086", 00:22:21.071 "model_number": "SPDK bdev Controller", 00:22:21.071 "serial_number": "SPDK0", 00:22:21.071 "firmware_revision": "24.01.1", 00:22:21.071 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:21.071 "oacs": { 00:22:21.071 "security": 0, 00:22:21.071 "format": 0, 00:22:21.071 "firmware": 0, 00:22:21.071 "ns_manage": 0 00:22:21.071 }, 00:22:21.071 "multi_ctrlr": true, 00:22:21.071 "ana_reporting": false 00:22:21.071 }, 00:22:21.071 "vs": { 00:22:21.071 "nvme_version": "1.3" 00:22:21.071 }, 00:22:21.071 "ns_data": { 00:22:21.071 "id": 1, 00:22:21.071 "can_share": true 00:22:21.071 } 00:22:21.071 } 00:22:21.071 ], 00:22:21.071 "mp_policy": "active_passive" 00:22:21.071 } 00:22:21.071 } 00:22:21.071 ] 00:22:21.071 20:39:39 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3570848 00:22:21.071 20:39:39 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:21.071 20:39:39 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:22:21.071 Running I/O for 10 
seconds... 00:22:22.007 Latency(us) 00:22:22.007 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.007 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:22.007 Nvme0n1 : 1.00 23870.00 93.24 0.00 0.00 0.00 0.00 0.00 00:22:22.007 =================================================================================================================== 00:22:22.007 Total : 23870.00 93.24 0.00 0.00 0.00 0.00 0.00 00:22:22.007 00:22:22.950 20:39:41 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7bfa35c7-f49c-443f-83d6-0f06284e40db 00:22:23.210 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:23.210 Nvme0n1 : 2.00 24022.50 93.84 0.00 0.00 0.00 0.00 0.00 00:22:23.210 =================================================================================================================== 00:22:23.210 Total : 24022.50 93.84 0.00 0.00 0.00 0.00 0.00 00:22:23.210 00:22:23.210 true 00:22:23.210 20:39:41 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bfa35c7-f49c-443f-83d6-0f06284e40db 00:22:23.210 20:39:41 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:22:23.469 20:39:41 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:22:23.469 20:39:41 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:22:23.469 20:39:41 -- target/nvmf_lvs_grow.sh@65 -- # wait 3570848 00:22:24.035 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:24.035 Nvme0n1 : 3.00 24084.67 94.08 0.00 0.00 0.00 0.00 0.00 00:22:24.035 =================================================================================================================== 00:22:24.035 Total : 24084.67 94.08 0.00 0.00 0.00 0.00 0.00 00:22:24.035 00:22:24.977 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:24.977 Nvme0n1 : 4.00 24116.00 94.20 0.00 0.00 0.00 0.00 0.00 00:22:24.977 =================================================================================================================== 00:22:24.977 Total : 24116.00 94.20 0.00 0.00 0.00 0.00 0.00 00:22:24.977 00:22:26.352 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:26.352 Nvme0n1 : 5.00 24143.80 94.31 0.00 0.00 0.00 0.00 0.00 00:22:26.352 =================================================================================================================== 00:22:26.352 Total : 24143.80 94.31 0.00 0.00 0.00 0.00 0.00 00:22:26.352 00:22:27.293 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:27.293 Nvme0n1 : 6.00 24151.83 94.34 0.00 0.00 0.00 0.00 0.00 00:22:27.293 =================================================================================================================== 00:22:27.293 Total : 24151.83 94.34 0.00 0.00 0.00 0.00 0.00 00:22:27.293 00:22:28.230 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:28.230 Nvme0n1 : 7.00 24175.71 94.44 0.00 0.00 0.00 0.00 0.00 00:22:28.230 =================================================================================================================== 00:22:28.230 Total : 24175.71 94.44 0.00 0.00 0.00 0.00 0.00 00:22:28.230 00:22:29.169 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:29.169 Nvme0n1 : 8.00 24161.25 94.38 0.00 0.00 0.00 0.00 0.00 00:22:29.169 
=================================================================================================================== 00:22:29.169 Total : 24161.25 94.38 0.00 0.00 0.00 0.00 0.00 00:22:29.170 00:22:30.106 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:30.107 Nvme0n1 : 9.00 24185.67 94.48 0.00 0.00 0.00 0.00 0.00 00:22:30.107 =================================================================================================================== 00:22:30.107 Total : 24185.67 94.48 0.00 0.00 0.00 0.00 0.00 00:22:30.107 00:22:31.047 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:31.047 Nvme0n1 : 10.00 24187.90 94.48 0.00 0.00 0.00 0.00 0.00 00:22:31.047 =================================================================================================================== 00:22:31.047 Total : 24187.90 94.48 0.00 0.00 0.00 0.00 0.00 00:22:31.047 00:22:31.047 00:22:31.047 Latency(us) 00:22:31.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.047 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:31.047 Nvme0n1 : 10.00 24188.98 94.49 0.00 0.00 5288.44 1612.53 11313.58 00:22:31.047 =================================================================================================================== 00:22:31.047 Total : 24188.98 94.49 0.00 0.00 5288.44 1612.53 11313.58 00:22:31.047 0 00:22:31.047 20:39:49 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3570577 00:22:31.047 20:39:49 -- common/autotest_common.sh@926 -- # '[' -z 3570577 ']' 00:22:31.047 20:39:49 -- common/autotest_common.sh@930 -- # kill -0 3570577 00:22:31.047 20:39:49 -- common/autotest_common.sh@931 -- # uname 00:22:31.047 20:39:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:31.047 20:39:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3570577 00:22:31.047 20:39:49 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:31.047 20:39:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:31.047 20:39:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3570577' 00:22:31.047 killing process with pid 3570577 00:22:31.047 20:39:49 -- common/autotest_common.sh@945 -- # kill 3570577 00:22:31.047 Received shutdown signal, test time was about 10.000000 seconds 00:22:31.047 00:22:31.047 Latency(us) 00:22:31.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.047 =================================================================================================================== 00:22:31.047 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:31.047 20:39:49 -- common/autotest_common.sh@950 -- # wait 3570577 00:22:31.614 20:39:49 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:31.614 20:39:49 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bfa35c7-f49c-443f-83d6-0f06284e40db 00:22:31.614 20:39:49 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:22:31.873 20:39:50 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:22:31.873 20:39:50 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:22:31.873 20:39:50 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 3567160 00:22:31.873 20:39:50 -- target/nvmf_lvs_grow.sh@74 -- # wait 3567160 00:22:31.873 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 3567160 Killed "${NVMF_APP[@]}" "$@" 00:22:31.873 20:39:50 -- target/nvmf_lvs_grow.sh@74 -- # true 00:22:31.873 20:39:50 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:22:31.873 20:39:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:31.873 20:39:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:31.873 20:39:50 -- common/autotest_common.sh@10 -- # set +x 00:22:31.873 20:39:50 -- nvmf/common.sh@469 -- # nvmfpid=3572929 00:22:31.873 20:39:50 -- nvmf/common.sh@470 -- # waitforlisten 3572929 00:22:31.873 20:39:50 -- common/autotest_common.sh@819 -- # '[' -z 3572929 ']' 00:22:31.873 20:39:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.873 20:39:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:31.873 20:39:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:31.873 20:39:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:31.873 20:39:50 -- common/autotest_common.sh@10 -- # set +x 00:22:31.874 20:39:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:31.874 [2024-04-26 20:39:50.140830] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:31.874 [2024-04-26 20:39:50.140909] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.874 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.134 [2024-04-26 20:39:50.239360] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.134 [2024-04-26 20:39:50.332982] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:32.134 [2024-04-26 20:39:50.333157] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.134 [2024-04-26 20:39:50.333169] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.134 [2024-04-26 20:39:50.333178] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
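What makes this variant dirty: the first target (pid 3567160) was killed with SIGKILL while the lvstore was still open, so its superblock was never marked clean, and the freshly started target must replay the blobstore metadata when the AIO bdev is recreated below. In outline, using the same stand-in variables and abbreviated paths as the earlier sketches, and with the expected counts taken from this run's geometry (400 MiB store, 4 MiB clusters, one 150 MiB lvol):

  kill -9 "$nvmfpid"                                  # dirty shutdown, lvstore left open
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096    # triggers "Performing recovery on blobstore"
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'        # expect 61
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # expect 99

The free count of 61 follows from the geometry: 99 data clusters minus the 38 clusters (150 MiB rounded up to 4 MiB units) held by the thick-provisioned lvol, confirming the grow performed before the kill survived the unclean restart.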
00:22:32.134 [2024-04-26 20:39:50.333209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.704 20:39:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:32.704 20:39:50 -- common/autotest_common.sh@852 -- # return 0 00:22:32.704 20:39:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:32.704 20:39:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:32.704 20:39:50 -- common/autotest_common.sh@10 -- # set +x 00:22:32.704 20:39:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:32.704 20:39:50 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:22:32.961 [2024-04-26 20:39:51.053663] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:22:32.961 [2024-04-26 20:39:51.053792] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:22:32.961 [2024-04-26 20:39:51.053820] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:22:32.961 20:39:51 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:22:32.961 20:39:51 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev e767412c-8767-4654-886b-4e02b96a5df4 00:22:32.961 20:39:51 -- common/autotest_common.sh@887 -- # local bdev_name=e767412c-8767-4654-886b-4e02b96a5df4 00:22:32.961 20:39:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:32.961 20:39:51 -- common/autotest_common.sh@889 -- # local i 00:22:32.961 20:39:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:32.961 20:39:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:32.961 20:39:51 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:22:32.961 20:39:51 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e767412c-8767-4654-886b-4e02b96a5df4 -t 2000 00:22:33.219 [ 00:22:33.219 { 00:22:33.219 "name": "e767412c-8767-4654-886b-4e02b96a5df4", 00:22:33.219 "aliases": [ 00:22:33.219 "lvs/lvol" 00:22:33.219 ], 00:22:33.219 "product_name": "Logical Volume", 00:22:33.219 "block_size": 4096, 00:22:33.219 "num_blocks": 38912, 00:22:33.219 "uuid": "e767412c-8767-4654-886b-4e02b96a5df4", 00:22:33.219 "assigned_rate_limits": { 00:22:33.219 "rw_ios_per_sec": 0, 00:22:33.219 "rw_mbytes_per_sec": 0, 00:22:33.219 "r_mbytes_per_sec": 0, 00:22:33.219 "w_mbytes_per_sec": 0 00:22:33.219 }, 00:22:33.219 "claimed": false, 00:22:33.219 "zoned": false, 00:22:33.219 "supported_io_types": { 00:22:33.219 "read": true, 00:22:33.219 "write": true, 00:22:33.219 "unmap": true, 00:22:33.219 "write_zeroes": true, 00:22:33.219 "flush": false, 00:22:33.219 "reset": true, 00:22:33.219 "compare": false, 00:22:33.219 "compare_and_write": false, 00:22:33.219 "abort": false, 00:22:33.219 "nvme_admin": false, 00:22:33.219 "nvme_io": false 00:22:33.219 }, 00:22:33.219 "driver_specific": { 00:22:33.219 "lvol": { 00:22:33.219 "lvol_store_uuid": "7bfa35c7-f49c-443f-83d6-0f06284e40db", 00:22:33.219 "base_bdev": "aio_bdev", 00:22:33.219 "thin_provision": false, 00:22:33.219 "snapshot": false, 00:22:33.219 "clone": false, 00:22:33.219 "esnap_clone": false 00:22:33.219 } 00:22:33.219 } 00:22:33.219 } 00:22:33.219 ] 00:22:33.219 20:39:51 -- common/autotest_common.sh@895 -- # return 0 00:22:33.219 20:39:51 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bfa35c7-f49c-443f-83d6-0f06284e40db 00:22:33.219 20:39:51 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:22:33.219 20:39:51 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:22:33.219 20:39:51 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bfa35c7-f49c-443f-83d6-0f06284e40db 00:22:33.219 20:39:51 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:22:33.478 20:39:51 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:22:33.478 20:39:51 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:22:33.478 [2024-04-26 20:39:51.728075] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:22:33.478 20:39:51 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bfa35c7-f49c-443f-83d6-0f06284e40db 00:22:33.478 20:39:51 -- common/autotest_common.sh@640 -- # local es=0 00:22:33.478 20:39:51 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bfa35c7-f49c-443f-83d6-0f06284e40db 00:22:33.478 20:39:51 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:22:33.478 20:39:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:33.478 20:39:51 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:22:33.478 20:39:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:33.478 20:39:51 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:22:33.478 20:39:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:33.478 20:39:51 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:22:33.478 20:39:51 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:22:33.478 20:39:51 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bfa35c7-f49c-443f-83d6-0f06284e40db 00:22:33.738 request: 00:22:33.738 { 00:22:33.738 "uuid": "7bfa35c7-f49c-443f-83d6-0f06284e40db", 00:22:33.738 "method": "bdev_lvol_get_lvstores", 00:22:33.738 "req_id": 1 00:22:33.738 } 00:22:33.738 Got JSON-RPC error response 00:22:33.738 response: 00:22:33.738 { 00:22:33.738 "code": -19, 00:22:33.738 "message": "No such device" 00:22:33.738 } 00:22:33.738 20:39:51 -- common/autotest_common.sh@643 -- # es=1 00:22:33.738 20:39:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:33.738 20:39:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:33.738 20:39:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:33.738 20:39:51 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:22:33.738 aio_bdev 00:22:33.738 20:39:52 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev e767412c-8767-4654-886b-4e02b96a5df4 00:22:33.738 20:39:52 -- common/autotest_common.sh@887 -- # local 
bdev_name=e767412c-8767-4654-886b-4e02b96a5df4 00:22:33.738 20:39:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:33.738 20:39:52 -- common/autotest_common.sh@889 -- # local i 00:22:33.738 20:39:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:33.738 20:39:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:33.738 20:39:52 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:22:33.998 20:39:52 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e767412c-8767-4654-886b-4e02b96a5df4 -t 2000 00:22:33.998 [ 00:22:33.998 { 00:22:33.998 "name": "e767412c-8767-4654-886b-4e02b96a5df4", 00:22:33.998 "aliases": [ 00:22:33.998 "lvs/lvol" 00:22:33.998 ], 00:22:33.998 "product_name": "Logical Volume", 00:22:33.998 "block_size": 4096, 00:22:33.998 "num_blocks": 38912, 00:22:33.998 "uuid": "e767412c-8767-4654-886b-4e02b96a5df4", 00:22:33.998 "assigned_rate_limits": { 00:22:33.998 "rw_ios_per_sec": 0, 00:22:33.998 "rw_mbytes_per_sec": 0, 00:22:33.998 "r_mbytes_per_sec": 0, 00:22:33.998 "w_mbytes_per_sec": 0 00:22:33.998 }, 00:22:33.998 "claimed": false, 00:22:33.998 "zoned": false, 00:22:33.998 "supported_io_types": { 00:22:33.998 "read": true, 00:22:33.998 "write": true, 00:22:33.998 "unmap": true, 00:22:33.998 "write_zeroes": true, 00:22:33.998 "flush": false, 00:22:33.998 "reset": true, 00:22:33.998 "compare": false, 00:22:33.998 "compare_and_write": false, 00:22:33.998 "abort": false, 00:22:33.998 "nvme_admin": false, 00:22:33.998 "nvme_io": false 00:22:33.998 }, 00:22:33.998 "driver_specific": { 00:22:33.998 "lvol": { 00:22:33.998 "lvol_store_uuid": "7bfa35c7-f49c-443f-83d6-0f06284e40db", 00:22:33.998 "base_bdev": "aio_bdev", 00:22:33.998 "thin_provision": false, 00:22:33.998 "snapshot": false, 00:22:33.998 "clone": false, 00:22:33.998 "esnap_clone": false 00:22:33.998 } 00:22:33.998 } 00:22:33.998 } 00:22:33.998 ] 00:22:33.998 20:39:52 -- common/autotest_common.sh@895 -- # return 0 00:22:33.998 20:39:52 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:22:33.998 20:39:52 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bfa35c7-f49c-443f-83d6-0f06284e40db 00:22:34.259 20:39:52 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:22:34.259 20:39:52 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bfa35c7-f49c-443f-83d6-0f06284e40db 00:22:34.259 20:39:52 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:22:34.259 20:39:52 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:22:34.259 20:39:52 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e767412c-8767-4654-886b-4e02b96a5df4 00:22:34.518 20:39:52 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7bfa35c7-f49c-443f-83d6-0f06284e40db 00:22:34.776 20:39:52 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:22:34.776 20:39:53 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:22:34.776 00:22:34.776 real 0m16.335s 00:22:34.776 user 0m42.288s 00:22:34.776 sys 0m3.070s 00:22:34.776 20:39:53 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:22:34.776 20:39:53 -- common/autotest_common.sh@10 -- # set +x 00:22:34.776 ************************************ 00:22:34.776 END TEST lvs_grow_dirty 00:22:34.776 ************************************ 00:22:34.776 20:39:53 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:22:34.776 20:39:53 -- common/autotest_common.sh@796 -- # type=--id 00:22:34.776 20:39:53 -- common/autotest_common.sh@797 -- # id=0 00:22:34.776 20:39:53 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:22:34.776 20:39:53 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:34.776 20:39:53 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:22:34.776 20:39:53 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:22:34.776 20:39:53 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:22:34.776 20:39:53 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:34.776 nvmf_trace.0 00:22:34.776 20:39:53 -- common/autotest_common.sh@811 -- # return 0 00:22:34.776 20:39:53 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:22:34.776 20:39:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:34.776 20:39:53 -- nvmf/common.sh@116 -- # sync 00:22:34.776 20:39:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:34.776 20:39:53 -- nvmf/common.sh@119 -- # set +e 00:22:34.776 20:39:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:34.776 20:39:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:34.776 rmmod nvme_tcp 00:22:34.776 rmmod nvme_fabrics 00:22:35.034 rmmod nvme_keyring 00:22:35.034 20:39:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:35.034 20:39:53 -- nvmf/common.sh@123 -- # set -e 00:22:35.034 20:39:53 -- nvmf/common.sh@124 -- # return 0 00:22:35.034 20:39:53 -- nvmf/common.sh@477 -- # '[' -n 3572929 ']' 00:22:35.034 20:39:53 -- nvmf/common.sh@478 -- # killprocess 3572929 00:22:35.034 20:39:53 -- common/autotest_common.sh@926 -- # '[' -z 3572929 ']' 00:22:35.034 20:39:53 -- common/autotest_common.sh@930 -- # kill -0 3572929 00:22:35.034 20:39:53 -- common/autotest_common.sh@931 -- # uname 00:22:35.034 20:39:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:35.035 20:39:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3572929 00:22:35.035 20:39:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:35.035 20:39:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:35.035 20:39:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3572929' 00:22:35.035 killing process with pid 3572929 00:22:35.035 20:39:53 -- common/autotest_common.sh@945 -- # kill 3572929 00:22:35.035 20:39:53 -- common/autotest_common.sh@950 -- # wait 3572929 00:22:35.292 20:39:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:35.292 20:39:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:35.292 20:39:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:35.292 20:39:53 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:35.292 20:39:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:35.292 20:39:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.292 20:39:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:35.292 20:39:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.828 20:39:55 -- nvmf/common.sh@278 -- # ip 
-4 addr flush cvl_0_1 00:22:37.828 00:22:37.828 real 0m41.555s 00:22:37.828 user 1m2.065s 00:22:37.828 sys 0m9.756s 00:22:37.828 20:39:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:37.828 20:39:55 -- common/autotest_common.sh@10 -- # set +x 00:22:37.828 ************************************ 00:22:37.828 END TEST nvmf_lvs_grow 00:22:37.828 ************************************ 00:22:37.828 20:39:55 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:22:37.828 20:39:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:37.828 20:39:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:37.828 20:39:55 -- common/autotest_common.sh@10 -- # set +x 00:22:37.828 ************************************ 00:22:37.828 START TEST nvmf_bdev_io_wait 00:22:37.828 ************************************ 00:22:37.828 20:39:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:22:37.828 * Looking for test storage... 00:22:37.828 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:22:37.828 20:39:55 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.828 20:39:55 -- nvmf/common.sh@7 -- # uname -s 00:22:37.828 20:39:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.828 20:39:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.828 20:39:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.828 20:39:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.828 20:39:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.828 20:39:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.828 20:39:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.828 20:39:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.828 20:39:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.828 20:39:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.828 20:39:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:22:37.828 20:39:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:22:37.828 20:39:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.828 20:39:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.828 20:39:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:37.828 20:39:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:37.828 20:39:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.828 20:39:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.828 20:39:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.828 20:39:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.828 
20:39:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.828 20:39:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.828 20:39:55 -- paths/export.sh@5 -- # export PATH 00:22:37.829 20:39:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.829 20:39:55 -- nvmf/common.sh@46 -- # : 0 00:22:37.829 20:39:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:37.829 20:39:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:37.829 20:39:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:37.829 20:39:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.829 20:39:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.829 20:39:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:37.829 20:39:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:37.829 20:39:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:37.829 20:39:55 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:37.829 20:39:55 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:37.829 20:39:55 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:22:37.829 20:39:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:37.829 20:39:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.829 20:39:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:37.829 20:39:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:37.829 20:39:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:37.829 20:39:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.829 20:39:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:37.829 20:39:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.829 20:39:55 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:22:37.829 20:39:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:37.829 20:39:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:37.829 20:39:55 -- common/autotest_common.sh@10 -- # set +x 00:22:44.411 20:40:02 -- 
nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:44.411 20:40:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:44.411 20:40:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:44.412 20:40:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:44.412 20:40:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:44.412 20:40:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:44.412 20:40:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:44.412 20:40:02 -- nvmf/common.sh@294 -- # net_devs=() 00:22:44.412 20:40:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:44.412 20:40:02 -- nvmf/common.sh@295 -- # e810=() 00:22:44.412 20:40:02 -- nvmf/common.sh@295 -- # local -ga e810 00:22:44.412 20:40:02 -- nvmf/common.sh@296 -- # x722=() 00:22:44.412 20:40:02 -- nvmf/common.sh@296 -- # local -ga x722 00:22:44.412 20:40:02 -- nvmf/common.sh@297 -- # mlx=() 00:22:44.412 20:40:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:44.412 20:40:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:44.412 20:40:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:44.412 20:40:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:44.412 20:40:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:44.412 20:40:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:44.412 20:40:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:44.412 20:40:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:44.412 20:40:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:44.412 20:40:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:44.412 20:40:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:44.412 20:40:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:44.412 20:40:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:44.412 20:40:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:44.412 20:40:02 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:22:44.412 20:40:02 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:22:44.412 20:40:02 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:22:44.412 20:40:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:44.412 20:40:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:44.412 20:40:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:22:44.412 Found 0000:27:00.0 (0x8086 - 0x159b) 00:22:44.412 20:40:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:44.412 20:40:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:44.412 20:40:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.412 20:40:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.412 20:40:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:44.412 20:40:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:44.412 20:40:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:22:44.412 Found 0000:27:00.1 (0x8086 - 0x159b) 00:22:44.412 20:40:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:44.412 20:40:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:44.412 20:40:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.412 20:40:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.412 20:40:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:44.412 
20:40:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:44.412 20:40:02 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:22:44.412 20:40:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:44.412 20:40:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.412 20:40:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:44.412 20:40:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.412 20:40:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:22:44.412 Found net devices under 0000:27:00.0: cvl_0_0 00:22:44.412 20:40:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.412 20:40:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:44.412 20:40:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.412 20:40:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:44.412 20:40:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.412 20:40:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:22:44.412 Found net devices under 0000:27:00.1: cvl_0_1 00:22:44.412 20:40:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.412 20:40:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:44.412 20:40:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:44.412 20:40:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:44.412 20:40:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:44.412 20:40:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:44.412 20:40:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:44.412 20:40:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:44.412 20:40:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:44.412 20:40:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:44.412 20:40:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:44.412 20:40:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:44.412 20:40:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:44.412 20:40:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:44.412 20:40:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:44.412 20:40:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:44.412 20:40:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:44.412 20:40:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:44.412 20:40:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:44.412 20:40:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:44.412 20:40:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:44.412 20:40:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:44.412 20:40:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:44.412 20:40:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:44.412 20:40:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:44.412 20:40:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:44.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:44.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:22:44.412 00:22:44.412 --- 10.0.0.2 ping statistics --- 00:22:44.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.412 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:22:44.412 20:40:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:44.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:44.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.346 ms 00:22:44.412 00:22:44.412 --- 10.0.0.1 ping statistics --- 00:22:44.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.412 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:22:44.412 20:40:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:44.412 20:40:02 -- nvmf/common.sh@410 -- # return 0 00:22:44.412 20:40:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:44.412 20:40:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:44.412 20:40:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:44.412 20:40:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:44.412 20:40:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:44.412 20:40:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:44.412 20:40:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:44.412 20:40:02 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:44.412 20:40:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:44.412 20:40:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:44.412 20:40:02 -- common/autotest_common.sh@10 -- # set +x 00:22:44.412 20:40:02 -- nvmf/common.sh@469 -- # nvmfpid=3577869 00:22:44.412 20:40:02 -- nvmf/common.sh@470 -- # waitforlisten 3577869 00:22:44.412 20:40:02 -- common/autotest_common.sh@819 -- # '[' -z 3577869 ']' 00:22:44.412 20:40:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.412 20:40:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:44.412 20:40:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.412 20:40:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:44.412 20:40:02 -- common/autotest_common.sh@10 -- # set +x 00:22:44.412 20:40:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:44.412 [2024-04-26 20:40:02.588211] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:44.412 [2024-04-26 20:40:02.588340] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.412 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.412 [2024-04-26 20:40:02.729543] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:44.673 [2024-04-26 20:40:02.825013] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:44.673 [2024-04-26 20:40:02.825210] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
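(Note: the nvmftestinit plumbing traced above reduces to the sketch below; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses are specific to this rig and will differ on other hardware.)

# target-side port goes into its own namespace, initiator port stays in the root ns
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                                  # reachability, root ns -> netns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # and back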
00:22:44.673 [2024-04-26 20:40:02.825224] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.673 [2024-04-26 20:40:02.825234] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:44.673 [2024-04-26 20:40:02.825409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.673 [2024-04-26 20:40:02.825510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:44.673 [2024-04-26 20:40:02.825617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.673 [2024-04-26 20:40:02.825627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:45.245 20:40:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:45.245 20:40:03 -- common/autotest_common.sh@852 -- # return 0 00:22:45.245 20:40:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:45.245 20:40:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:45.245 20:40:03 -- common/autotest_common.sh@10 -- # set +x 00:22:45.245 20:40:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.245 20:40:03 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:22:45.245 20:40:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:45.245 20:40:03 -- common/autotest_common.sh@10 -- # set +x 00:22:45.245 20:40:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:45.245 20:40:03 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:22:45.245 20:40:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:45.245 20:40:03 -- common/autotest_common.sh@10 -- # set +x 00:22:45.245 20:40:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:45.245 20:40:03 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:45.245 20:40:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:45.245 20:40:03 -- common/autotest_common.sh@10 -- # set +x 00:22:45.245 [2024-04-26 20:40:03.457048] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.245 20:40:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:45.245 20:40:03 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:45.245 20:40:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:45.245 20:40:03 -- common/autotest_common.sh@10 -- # set +x 00:22:45.245 Malloc0 00:22:45.245 20:40:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:45.245 20:40:03 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:45.245 20:40:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:45.245 20:40:03 -- common/autotest_common.sh@10 -- # set +x 00:22:45.245 20:40:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:45.245 20:40:03 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:45.245 20:40:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:45.245 20:40:03 -- common/autotest_common.sh@10 -- # set +x 00:22:45.245 20:40:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:45.245 20:40:03 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:45.245 20:40:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:45.245 20:40:03 -- common/autotest_common.sh@10 -- # set +x 
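(Note: the rpc_cmd calls traced above amount to the following sequence against the target's default /var/tmp/spdk.sock; the reading of -p/-c as shrunken bdev I/O pool sizes, which is what later forces I/O to queue and exercises the io_wait path, is an inference and not stated in the log itself.)

RPC=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
$RPC bdev_set_options -p 5 -c 1                 # tiny pool/cache sizes (inferred meaning of -p/-c)
$RPC framework_start_init                       # leave the --wait-for-rpc pause
$RPC nvmf_create_transport -t tcp -o -u 8192    # transport options exactly as traced above
$RPC bdev_malloc_create 64 512 -b Malloc0       # 64 MiB malloc bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420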
00:22:45.245 [2024-04-26 20:40:03.540540] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.245 20:40:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:45.245 20:40:03 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3577995 00:22:45.245 20:40:03 -- target/bdev_io_wait.sh@30 -- # READ_PID=3577997 00:22:45.245 20:40:03 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:22:45.245 20:40:03 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3577999 00:22:45.245 20:40:03 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:22:45.245 20:40:03 -- nvmf/common.sh@520 -- # config=() 00:22:45.245 20:40:03 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3578001 00:22:45.245 20:40:03 -- nvmf/common.sh@520 -- # local subsystem config 00:22:45.245 20:40:03 -- target/bdev_io_wait.sh@35 -- # sync 00:22:45.245 20:40:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:45.245 20:40:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:45.245 { 00:22:45.245 "params": { 00:22:45.245 "name": "Nvme$subsystem", 00:22:45.245 "trtype": "$TEST_TRANSPORT", 00:22:45.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.245 "adrfam": "ipv4", 00:22:45.245 "trsvcid": "$NVMF_PORT", 00:22:45.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.245 "hdgst": ${hdgst:-false}, 00:22:45.245 "ddgst": ${ddgst:-false} 00:22:45.245 }, 00:22:45.245 "method": "bdev_nvme_attach_controller" 00:22:45.245 } 00:22:45.245 EOF 00:22:45.245 )") 00:22:45.245 20:40:03 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:22:45.245 20:40:03 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:22:45.245 20:40:03 -- nvmf/common.sh@520 -- # config=() 00:22:45.245 20:40:03 -- nvmf/common.sh@520 -- # local subsystem config 00:22:45.245 20:40:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:45.245 20:40:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:45.245 { 00:22:45.245 "params": { 00:22:45.245 "name": "Nvme$subsystem", 00:22:45.245 "trtype": "$TEST_TRANSPORT", 00:22:45.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.245 "adrfam": "ipv4", 00:22:45.245 "trsvcid": "$NVMF_PORT", 00:22:45.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.245 "hdgst": ${hdgst:-false}, 00:22:45.245 "ddgst": ${ddgst:-false} 00:22:45.245 }, 00:22:45.245 "method": "bdev_nvme_attach_controller" 00:22:45.245 } 00:22:45.245 EOF 00:22:45.245 )") 00:22:45.245 20:40:03 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:22:45.245 20:40:03 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:22:45.245 20:40:03 -- nvmf/common.sh@520 -- # config=() 00:22:45.245 20:40:03 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:22:45.245 20:40:03 -- nvmf/common.sh@520 -- # local subsystem config 00:22:45.245 20:40:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:45.245 20:40:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:45.245 { 00:22:45.245 "params": { 
00:22:45.245 "name": "Nvme$subsystem", 00:22:45.245 "trtype": "$TEST_TRANSPORT", 00:22:45.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.245 "adrfam": "ipv4", 00:22:45.245 "trsvcid": "$NVMF_PORT", 00:22:45.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.245 "hdgst": ${hdgst:-false}, 00:22:45.245 "ddgst": ${ddgst:-false} 00:22:45.245 }, 00:22:45.245 "method": "bdev_nvme_attach_controller" 00:22:45.245 } 00:22:45.245 EOF 00:22:45.245 )") 00:22:45.245 20:40:03 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:22:45.245 20:40:03 -- nvmf/common.sh@520 -- # config=() 00:22:45.245 20:40:03 -- target/bdev_io_wait.sh@37 -- # wait 3577995 00:22:45.245 20:40:03 -- nvmf/common.sh@542 -- # cat 00:22:45.245 20:40:03 -- nvmf/common.sh@520 -- # local subsystem config 00:22:45.245 20:40:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:45.245 20:40:03 -- nvmf/common.sh@542 -- # cat 00:22:45.245 20:40:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:45.245 { 00:22:45.245 "params": { 00:22:45.245 "name": "Nvme$subsystem", 00:22:45.245 "trtype": "$TEST_TRANSPORT", 00:22:45.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.245 "adrfam": "ipv4", 00:22:45.245 "trsvcid": "$NVMF_PORT", 00:22:45.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.245 "hdgst": ${hdgst:-false}, 00:22:45.246 "ddgst": ${ddgst:-false} 00:22:45.246 }, 00:22:45.246 "method": "bdev_nvme_attach_controller" 00:22:45.246 } 00:22:45.246 EOF 00:22:45.246 )") 00:22:45.246 20:40:03 -- nvmf/common.sh@542 -- # cat 00:22:45.246 20:40:03 -- nvmf/common.sh@542 -- # cat 00:22:45.246 20:40:03 -- nvmf/common.sh@544 -- # jq . 00:22:45.246 20:40:03 -- nvmf/common.sh@544 -- # jq . 00:22:45.246 20:40:03 -- nvmf/common.sh@544 -- # jq . 00:22:45.246 20:40:03 -- nvmf/common.sh@544 -- # jq . 
00:22:45.246 20:40:03 -- nvmf/common.sh@545 -- # IFS=, 00:22:45.246 20:40:03 -- nvmf/common.sh@545 -- # IFS=, 00:22:45.246 20:40:03 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:22:45.246 "params": { 00:22:45.246 "name": "Nvme1", 00:22:45.246 "trtype": "tcp", 00:22:45.246 "traddr": "10.0.0.2", 00:22:45.246 "adrfam": "ipv4", 00:22:45.246 "trsvcid": "4420", 00:22:45.246 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.246 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:45.246 "hdgst": false, 00:22:45.246 "ddgst": false 00:22:45.246 }, 00:22:45.246 "method": "bdev_nvme_attach_controller" 00:22:45.246 }' 00:22:45.246 20:40:03 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:22:45.246 "params": { 00:22:45.246 "name": "Nvme1", 00:22:45.246 "trtype": "tcp", 00:22:45.246 "traddr": "10.0.0.2", 00:22:45.246 "adrfam": "ipv4", 00:22:45.246 "trsvcid": "4420", 00:22:45.246 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.246 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:45.246 "hdgst": false, 00:22:45.246 "ddgst": false 00:22:45.246 }, 00:22:45.246 "method": "bdev_nvme_attach_controller" 00:22:45.246 }' 00:22:45.246 20:40:03 -- nvmf/common.sh@545 -- # IFS=, 00:22:45.246 20:40:03 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:22:45.246 "params": { 00:22:45.246 "name": "Nvme1", 00:22:45.246 "trtype": "tcp", 00:22:45.246 "traddr": "10.0.0.2", 00:22:45.246 "adrfam": "ipv4", 00:22:45.246 "trsvcid": "4420", 00:22:45.246 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.246 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:45.246 "hdgst": false, 00:22:45.246 "ddgst": false 00:22:45.246 }, 00:22:45.246 "method": "bdev_nvme_attach_controller" 00:22:45.246 }' 00:22:45.246 20:40:03 -- nvmf/common.sh@545 -- # IFS=, 00:22:45.246 20:40:03 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:22:45.246 "params": { 00:22:45.246 "name": "Nvme1", 00:22:45.246 "trtype": "tcp", 00:22:45.246 "traddr": "10.0.0.2", 00:22:45.246 "adrfam": "ipv4", 00:22:45.246 "trsvcid": "4420", 00:22:45.246 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.246 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:45.246 "hdgst": false, 00:22:45.246 "ddgst": false 00:22:45.246 }, 00:22:45.246 "method": "bdev_nvme_attach_controller" 00:22:45.246 }' 00:22:45.507 [2024-04-26 20:40:03.600763] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:45.507 [2024-04-26 20:40:03.600763] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:45.507 [2024-04-26 20:40:03.600853] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:22:45.507 [2024-04-26 20:40:03.600854] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:22:45.507 [2024-04-26 20:40:03.614514] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:45.507 [2024-04-26 20:40:03.614620] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:22:45.507 [2024-04-26 20:40:03.623983] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:22:45.507 [2024-04-26 20:40:03.624123] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:45.507 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.507 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.507 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.507 [2024-04-26 20:40:03.795027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.507 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.507 [2024-04-26 20:40:03.822121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.769 [2024-04-26 20:40:03.867227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.769 [2024-04-26 20:40:03.920012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:45.769 [2024-04-26 20:40:03.957539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:45.769 [2024-04-26 20:40:03.969819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.769 [2024-04-26 20:40:04.001069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:22:45.769 [2024-04-26 20:40:04.105509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:46.030 Running I/O for 1 seconds... 00:22:46.030 Running I/O for 1 seconds... 00:22:46.030 Running I/O for 1 seconds... 00:22:46.292 Running I/O for 1 seconds... 00:22:47.234 00:22:47.234 Latency(us) 00:22:47.234 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.234 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:22:47.234 Nvme1n1 : 1.00 168704.43 659.00 0.00 0.00 755.80 229.59 1215.87 00:22:47.234 =================================================================================================================== 00:22:47.234 Total : 168704.43 659.00 0.00 0.00 755.80 229.59 1215.87 00:22:47.234 00:22:47.234 Latency(us) 00:22:47.234 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.234 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:22:47.234 Nvme1n1 : 1.01 9026.18 35.26 0.00 0.00 14086.21 3621.73 21523.40 00:22:47.234 =================================================================================================================== 00:22:47.234 Total : 9026.18 35.26 0.00 0.00 14086.21 3621.73 21523.40 00:22:47.234 00:22:47.234 Latency(us) 00:22:47.234 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.234 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:22:47.234 Nvme1n1 : 1.00 16093.20 62.86 0.00 0.00 7930.83 4328.83 16832.40 00:22:47.234 =================================================================================================================== 00:22:47.234 Total : 16093.20 62.86 0.00 0.00 7930.83 4328.83 16832.40 00:22:47.234 00:22:47.234 Latency(us) 00:22:47.234 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.234 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:22:47.234 Nvme1n1 : 1.00 10483.53 40.95 0.00 0.00 12181.60 3311.29 26214.40 00:22:47.234 =================================================================================================================== 00:22:47.234 Total : 10483.53 40.95 0.00 0.00 12181.60 3311.29 26214.40 00:22:47.807 20:40:06 -- target/bdev_io_wait.sh@38 -- # wait 3577997 00:22:47.807 
20:40:06 -- target/bdev_io_wait.sh@39 -- # wait 3577999 00:22:47.807 20:40:06 -- target/bdev_io_wait.sh@40 -- # wait 3578001 00:22:47.807 20:40:06 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:47.807 20:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:47.807 20:40:06 -- common/autotest_common.sh@10 -- # set +x 00:22:47.807 20:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:47.807 20:40:06 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:22:47.807 20:40:06 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:22:47.807 20:40:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:47.807 20:40:06 -- nvmf/common.sh@116 -- # sync 00:22:47.807 20:40:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:47.807 20:40:06 -- nvmf/common.sh@119 -- # set +e 00:22:47.807 20:40:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:47.807 20:40:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:47.807 rmmod nvme_tcp 00:22:47.807 rmmod nvme_fabrics 00:22:47.807 rmmod nvme_keyring 00:22:47.807 20:40:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:47.807 20:40:06 -- nvmf/common.sh@123 -- # set -e 00:22:47.807 20:40:06 -- nvmf/common.sh@124 -- # return 0 00:22:47.807 20:40:06 -- nvmf/common.sh@477 -- # '[' -n 3577869 ']' 00:22:47.807 20:40:06 -- nvmf/common.sh@478 -- # killprocess 3577869 00:22:47.807 20:40:06 -- common/autotest_common.sh@926 -- # '[' -z 3577869 ']' 00:22:47.807 20:40:06 -- common/autotest_common.sh@930 -- # kill -0 3577869 00:22:47.807 20:40:06 -- common/autotest_common.sh@931 -- # uname 00:22:47.807 20:40:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:47.807 20:40:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3577869 00:22:47.807 20:40:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:47.807 20:40:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:47.807 20:40:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3577869' 00:22:47.807 killing process with pid 3577869 00:22:47.807 20:40:06 -- common/autotest_common.sh@945 -- # kill 3577869 00:22:47.807 20:40:06 -- common/autotest_common.sh@950 -- # wait 3577869 00:22:48.376 20:40:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:48.376 20:40:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:48.376 20:40:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:48.376 20:40:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:48.376 20:40:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:48.376 20:40:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.376 20:40:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:48.376 20:40:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.912 20:40:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:50.912 00:22:50.912 real 0m12.937s 00:22:50.912 user 0m23.280s 00:22:50.912 sys 0m6.981s 00:22:50.912 20:40:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:50.912 20:40:08 -- common/autotest_common.sh@10 -- # set +x 00:22:50.912 ************************************ 00:22:50.912 END TEST nvmf_bdev_io_wait 00:22:50.912 ************************************ 00:22:50.912 20:40:08 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:22:50.912 20:40:08 -- common/autotest_common.sh@1077 -- # 
'[' 3 -le 1 ']' 00:22:50.912 20:40:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:50.912 20:40:08 -- common/autotest_common.sh@10 -- # set +x 00:22:50.912 ************************************ 00:22:50.912 START TEST nvmf_queue_depth 00:22:50.912 ************************************ 00:22:50.912 20:40:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:22:50.912 * Looking for test storage... 00:22:50.912 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:22:50.912 20:40:08 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:50.912 20:40:08 -- nvmf/common.sh@7 -- # uname -s 00:22:50.912 20:40:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:50.912 20:40:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:50.912 20:40:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:50.912 20:40:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:50.912 20:40:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:50.912 20:40:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:50.912 20:40:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:50.912 20:40:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:50.912 20:40:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:50.912 20:40:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:50.912 20:40:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:22:50.912 20:40:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:22:50.912 20:40:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:50.912 20:40:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:50.912 20:40:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:50.912 20:40:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:50.912 20:40:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:50.912 20:40:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.912 20:40:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.912 20:40:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.912 20:40:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.912 20:40:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.912 20:40:08 -- paths/export.sh@5 -- # export PATH 00:22:50.912 20:40:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.912 20:40:08 -- nvmf/common.sh@46 -- # : 0 00:22:50.912 20:40:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:50.912 20:40:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:50.912 20:40:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:50.912 20:40:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:50.912 20:40:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:50.912 20:40:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:50.912 20:40:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:50.912 20:40:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:50.912 20:40:08 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:22:50.912 20:40:08 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:22:50.913 20:40:08 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:50.913 20:40:08 -- target/queue_depth.sh@19 -- # nvmftestinit 00:22:50.913 20:40:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:50.913 20:40:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.913 20:40:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:50.913 20:40:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:50.913 20:40:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:50.913 20:40:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.913 20:40:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:50.913 20:40:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.913 20:40:08 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:22:50.913 20:40:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:50.913 20:40:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:50.913 20:40:08 -- common/autotest_common.sh@10 -- # set +x 00:22:56.250 20:40:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:56.250 20:40:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:56.250 20:40:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:56.250 20:40:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:56.250 20:40:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:56.250 20:40:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:56.250 20:40:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:56.250 20:40:14 -- nvmf/common.sh@294 -- # 
net_devs=() 00:22:56.250 20:40:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:56.250 20:40:14 -- nvmf/common.sh@295 -- # e810=() 00:22:56.250 20:40:14 -- nvmf/common.sh@295 -- # local -ga e810 00:22:56.250 20:40:14 -- nvmf/common.sh@296 -- # x722=() 00:22:56.250 20:40:14 -- nvmf/common.sh@296 -- # local -ga x722 00:22:56.250 20:40:14 -- nvmf/common.sh@297 -- # mlx=() 00:22:56.250 20:40:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:56.250 20:40:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:56.250 20:40:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:56.250 20:40:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:56.250 20:40:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:56.250 20:40:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:56.250 20:40:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:56.250 20:40:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:56.250 20:40:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:56.250 20:40:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:56.250 20:40:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:56.250 20:40:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:56.250 20:40:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:56.250 20:40:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:56.250 20:40:14 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:22:56.250 20:40:14 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:22:56.250 20:40:14 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:22:56.250 20:40:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:56.250 20:40:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:56.250 20:40:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:22:56.250 Found 0000:27:00.0 (0x8086 - 0x159b) 00:22:56.250 20:40:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:56.250 20:40:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:56.250 20:40:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.250 20:40:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.250 20:40:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:56.250 20:40:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:56.250 20:40:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:22:56.250 Found 0000:27:00.1 (0x8086 - 0x159b) 00:22:56.250 20:40:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:56.250 20:40:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:56.250 20:40:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.250 20:40:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.250 20:40:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:56.250 20:40:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:56.250 20:40:14 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:22:56.250 20:40:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:56.250 20:40:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.250 20:40:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:56.250 20:40:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.250 20:40:14 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:27:00.0: cvl_0_0' 00:22:56.250 Found net devices under 0000:27:00.0: cvl_0_0 00:22:56.250 20:40:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.250 20:40:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:56.250 20:40:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.250 20:40:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:56.250 20:40:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.250 20:40:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:22:56.250 Found net devices under 0000:27:00.1: cvl_0_1 00:22:56.250 20:40:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.250 20:40:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:56.250 20:40:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:56.250 20:40:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:56.250 20:40:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:56.250 20:40:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:56.250 20:40:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:56.250 20:40:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:56.250 20:40:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:56.250 20:40:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:56.250 20:40:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:56.250 20:40:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:56.250 20:40:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:56.250 20:40:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:56.250 20:40:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:56.250 20:40:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:56.250 20:40:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:56.250 20:40:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:56.250 20:40:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:56.250 20:40:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:56.250 20:40:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:56.250 20:40:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:56.250 20:40:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:56.250 20:40:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:56.250 20:40:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:56.250 20:40:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:56.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:56.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:22:56.250 00:22:56.250 --- 10.0.0.2 ping statistics --- 00:22:56.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.250 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:22:56.250 20:40:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:56.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:56.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:22:56.250 00:22:56.250 --- 10.0.0.1 ping statistics --- 00:22:56.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.250 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:22:56.250 20:40:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:56.250 20:40:14 -- nvmf/common.sh@410 -- # return 0 00:22:56.250 20:40:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:56.250 20:40:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:56.250 20:40:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:56.250 20:40:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:56.250 20:40:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:56.250 20:40:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:56.250 20:40:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:56.250 20:40:14 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:22:56.250 20:40:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:56.250 20:40:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:56.250 20:40:14 -- common/autotest_common.sh@10 -- # set +x 00:22:56.250 20:40:14 -- nvmf/common.sh@469 -- # nvmfpid=3582641 00:22:56.250 20:40:14 -- nvmf/common.sh@470 -- # waitforlisten 3582641 00:22:56.250 20:40:14 -- common/autotest_common.sh@819 -- # '[' -z 3582641 ']' 00:22:56.250 20:40:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.250 20:40:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:56.250 20:40:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:56.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:56.250 20:40:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:56.250 20:40:14 -- common/autotest_common.sh@10 -- # set +x 00:22:56.250 20:40:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:56.250 [2024-04-26 20:40:14.526131] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:56.250 [2024-04-26 20:40:14.526263] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:56.510 EAL: No free 2048 kB hugepages reported on node 1 00:22:56.510 [2024-04-26 20:40:14.667005] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.510 [2024-04-26 20:40:14.763732] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:56.510 [2024-04-26 20:40:14.763943] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:56.510 [2024-04-26 20:40:14.763958] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:56.510 [2024-04-26 20:40:14.763968] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
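(Note: the notices above are the stock hints for retrieving tracepoint data; with the 0xFFFF group mask enabled via -e, a snapshot can be pulled from the shared-memory ring while the target runs, or the ring copied out afterwards, which is exactly what process_shm did at the end of the previous test.)

spdk_trace -s nvmf -i 0          # live snapshot of the target's tracepoints
cp /dev/shm/nvmf_trace.0 /tmp/   # or keep the raw ring for offline analysis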
00:22:56.510 [2024-04-26 20:40:14.764000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.081 20:40:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:57.082 20:40:15 -- common/autotest_common.sh@852 -- # return 0 00:22:57.082 20:40:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:57.082 20:40:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:57.082 20:40:15 -- common/autotest_common.sh@10 -- # set +x 00:22:57.082 20:40:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:57.082 20:40:15 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:57.082 20:40:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:57.082 20:40:15 -- common/autotest_common.sh@10 -- # set +x 00:22:57.082 [2024-04-26 20:40:15.279822] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:57.082 20:40:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:57.082 20:40:15 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:57.082 20:40:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:57.082 20:40:15 -- common/autotest_common.sh@10 -- # set +x 00:22:57.082 Malloc0 00:22:57.082 20:40:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:57.082 20:40:15 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:57.082 20:40:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:57.082 20:40:15 -- common/autotest_common.sh@10 -- # set +x 00:22:57.082 20:40:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:57.082 20:40:15 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:57.082 20:40:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:57.082 20:40:15 -- common/autotest_common.sh@10 -- # set +x 00:22:57.082 20:40:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:57.082 20:40:15 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:57.082 20:40:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:57.082 20:40:15 -- common/autotest_common.sh@10 -- # set +x 00:22:57.082 [2024-04-26 20:40:15.362200] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.082 20:40:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:57.082 20:40:15 -- target/queue_depth.sh@30 -- # bdevperf_pid=3582753 00:22:57.082 20:40:15 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:57.082 20:40:15 -- target/queue_depth.sh@33 -- # waitforlisten 3582753 /var/tmp/bdevperf.sock 00:22:57.082 20:40:15 -- common/autotest_common.sh@819 -- # '[' -z 3582753 ']' 00:22:57.082 20:40:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:57.082 20:40:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:57.082 20:40:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:57.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:57.082 20:40:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:57.082 20:40:15 -- common/autotest_common.sh@10 -- # set +x 00:22:57.082 20:40:15 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:22:57.340 [2024-04-26 20:40:15.438820] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:57.340 [2024-04-26 20:40:15.438935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3582753 ] 00:22:57.340 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.340 [2024-04-26 20:40:15.556255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.340 [2024-04-26 20:40:15.650609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.906 20:40:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:57.906 20:40:16 -- common/autotest_common.sh@852 -- # return 0 00:22:57.906 20:40:16 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:57.906 20:40:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:57.906 20:40:16 -- common/autotest_common.sh@10 -- # set +x 00:22:58.165 NVMe0n1 00:22:58.165 20:40:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.165 20:40:16 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:58.165 Running I/O for 10 seconds... 
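While that 10-second run is in flight, it is worth recapping the sequence the harness just drove, since it is also the by-hand recipe for standing up an NVMe/TCP target and aiming bdevperf at it. A condensed sketch of the same RPCs using rpc.py from this tree (the target answers on the default /var/tmp/spdk.sock; bdevperf, started with -z, idles on its own -r socket until told to run):

rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py

# target side: TCP transport, 64 MiB RAM-backed bdev, subsystem, namespace, listener
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side: attach a controller to the waiting bdevperf, then fire the workload
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests

The latency table that follows is bdevperf's summary of that -q 1024 verify workload.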
00:23:08.155 00:23:08.155 Latency(us) 00:23:08.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.155 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:23:08.155 Verification LBA range: start 0x0 length 0x4000 00:23:08.155 NVMe0n1 : 10.05 18265.98 71.35 0.00 0.00 55902.96 11106.63 45806.21 00:23:08.155 =================================================================================================================== 00:23:08.155 Total : 18265.98 71.35 0.00 0.00 55902.96 11106.63 45806.21 00:23:08.416 0 00:23:08.416 20:40:26 -- target/queue_depth.sh@39 -- # killprocess 3582753 00:23:08.416 20:40:26 -- common/autotest_common.sh@926 -- # '[' -z 3582753 ']' 00:23:08.416 20:40:26 -- common/autotest_common.sh@930 -- # kill -0 3582753 00:23:08.416 20:40:26 -- common/autotest_common.sh@931 -- # uname 00:23:08.416 20:40:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:08.416 20:40:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3582753 00:23:08.416 20:40:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:08.416 20:40:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:08.416 20:40:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3582753' 00:23:08.416 killing process with pid 3582753 00:23:08.417 20:40:26 -- common/autotest_common.sh@945 -- # kill 3582753 00:23:08.417 Received shutdown signal, test time was about 10.000000 seconds 00:23:08.417 00:23:08.417 Latency(us) 00:23:08.417 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.417 =================================================================================================================== 00:23:08.417 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:08.417 20:40:26 -- common/autotest_common.sh@950 -- # wait 3582753 00:23:08.677 20:40:26 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:23:08.677 20:40:26 -- target/queue_depth.sh@43 -- # nvmftestfini 00:23:08.677 20:40:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:08.677 20:40:26 -- nvmf/common.sh@116 -- # sync 00:23:08.677 20:40:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:08.677 20:40:26 -- nvmf/common.sh@119 -- # set +e 00:23:08.677 20:40:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:08.677 20:40:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:08.677 rmmod nvme_tcp 00:23:08.677 rmmod nvme_fabrics 00:23:08.677 rmmod nvme_keyring 00:23:08.677 20:40:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:08.677 20:40:26 -- nvmf/common.sh@123 -- # set -e 00:23:08.677 20:40:26 -- nvmf/common.sh@124 -- # return 0 00:23:08.677 20:40:26 -- nvmf/common.sh@477 -- # '[' -n 3582641 ']' 00:23:08.677 20:40:26 -- nvmf/common.sh@478 -- # killprocess 3582641 00:23:08.677 20:40:26 -- common/autotest_common.sh@926 -- # '[' -z 3582641 ']' 00:23:08.677 20:40:26 -- common/autotest_common.sh@930 -- # kill -0 3582641 00:23:08.677 20:40:27 -- common/autotest_common.sh@931 -- # uname 00:23:08.677 20:40:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:08.677 20:40:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3582641 00:23:08.938 20:40:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:08.938 20:40:27 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:08.938 20:40:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3582641' 00:23:08.938 killing process with pid 3582641 00:23:08.938 
20:40:27 -- common/autotest_common.sh@945 -- # kill 3582641 00:23:08.938 20:40:27 -- common/autotest_common.sh@950 -- # wait 3582641 00:23:09.506 20:40:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:09.506 20:40:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:09.506 20:40:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:09.506 20:40:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:09.506 20:40:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:09.506 20:40:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.506 20:40:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:09.506 20:40:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.413 20:40:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:11.413 00:23:11.413 real 0m20.935s 00:23:11.413 user 0m25.553s 00:23:11.413 sys 0m5.592s 00:23:11.413 20:40:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:11.413 20:40:29 -- common/autotest_common.sh@10 -- # set +x 00:23:11.413 ************************************ 00:23:11.413 END TEST nvmf_queue_depth 00:23:11.413 ************************************ 00:23:11.413 20:40:29 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:23:11.413 20:40:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:11.413 20:40:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:11.413 20:40:29 -- common/autotest_common.sh@10 -- # set +x 00:23:11.413 ************************************ 00:23:11.413 START TEST nvmf_multipath 00:23:11.413 ************************************ 00:23:11.413 20:40:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:23:11.413 * Looking for test storage... 
00:23:11.413 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:23:11.413 20:40:29 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:23:11.413 20:40:29 -- nvmf/common.sh@7 -- # uname -s 00:23:11.413 20:40:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.413 20:40:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.413 20:40:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.413 20:40:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.413 20:40:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.413 20:40:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.413 20:40:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.413 20:40:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.413 20:40:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.413 20:40:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.413 20:40:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:23:11.413 20:40:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:23:11.413 20:40:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.413 20:40:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.413 20:40:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:23:11.413 20:40:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:23:11.413 20:40:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.413 20:40:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.413 20:40:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.414 20:40:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.414 20:40:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.414 20:40:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.414 20:40:29 -- paths/export.sh@5 -- # export PATH 00:23:11.414 20:40:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.414 20:40:29 -- nvmf/common.sh@46 -- # : 0 00:23:11.414 20:40:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:11.414 20:40:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:11.414 20:40:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:11.414 20:40:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.414 20:40:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.414 20:40:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:11.414 20:40:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:11.414 20:40:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:11.414 20:40:29 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:11.414 20:40:29 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:11.414 20:40:29 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:11.414 20:40:29 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:23:11.414 20:40:29 -- target/multipath.sh@43 -- # nvmftestinit 00:23:11.414 20:40:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:11.414 20:40:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.414 20:40:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:11.414 20:40:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:11.414 20:40:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:11.414 20:40:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.414 20:40:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:11.414 20:40:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.414 20:40:29 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:23:11.414 20:40:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:11.414 20:40:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:11.414 20:40:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.688 20:40:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:16.688 20:40:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:16.688 20:40:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:16.688 20:40:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:16.688 20:40:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:16.688 20:40:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:16.688 20:40:34 
-- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:16.688 20:40:34 -- nvmf/common.sh@294 -- # net_devs=() 00:23:16.688 20:40:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:16.688 20:40:34 -- nvmf/common.sh@295 -- # e810=() 00:23:16.688 20:40:34 -- nvmf/common.sh@295 -- # local -ga e810 00:23:16.688 20:40:34 -- nvmf/common.sh@296 -- # x722=() 00:23:16.688 20:40:34 -- nvmf/common.sh@296 -- # local -ga x722 00:23:16.688 20:40:34 -- nvmf/common.sh@297 -- # mlx=() 00:23:16.688 20:40:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:16.688 20:40:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:16.688 20:40:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:16.688 20:40:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:16.688 20:40:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:16.688 20:40:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:16.688 20:40:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:16.688 20:40:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:16.688 20:40:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:16.688 20:40:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:16.688 20:40:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:16.688 20:40:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:16.688 20:40:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:16.688 20:40:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:16.688 20:40:34 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:23:16.688 20:40:34 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:23:16.688 20:40:34 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:23:16.688 20:40:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:16.688 20:40:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:16.688 20:40:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:23:16.688 Found 0000:27:00.0 (0x8086 - 0x159b) 00:23:16.688 20:40:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:16.688 20:40:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:16.688 20:40:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.688 20:40:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.688 20:40:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:16.688 20:40:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:16.688 20:40:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:23:16.688 Found 0000:27:00.1 (0x8086 - 0x159b) 00:23:16.688 20:40:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:16.688 20:40:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:16.688 20:40:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.688 20:40:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.688 20:40:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:16.688 20:40:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:16.688 20:40:34 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:23:16.688 20:40:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:16.688 20:40:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.688 20:40:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:16.688 20:40:34 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.688 20:40:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:23:16.688 Found net devices under 0000:27:00.0: cvl_0_0 00:23:16.688 20:40:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.688 20:40:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:16.688 20:40:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.688 20:40:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:16.688 20:40:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.688 20:40:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:23:16.688 Found net devices under 0000:27:00.1: cvl_0_1 00:23:16.688 20:40:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.688 20:40:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:16.688 20:40:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:16.688 20:40:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:16.688 20:40:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:16.688 20:40:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:16.688 20:40:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:16.688 20:40:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:16.688 20:40:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:16.688 20:40:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:16.688 20:40:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:16.688 20:40:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:16.688 20:40:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:16.688 20:40:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:16.688 20:40:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:16.688 20:40:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:16.688 20:40:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:16.688 20:40:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:16.688 20:40:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:16.948 20:40:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:16.948 20:40:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:16.948 20:40:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:16.948 20:40:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:16.948 20:40:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:16.948 20:40:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:16.948 20:40:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:16.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:16.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:23:16.948 00:23:16.948 --- 10.0.0.2 ping statistics --- 00:23:16.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.948 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:23:16.948 20:40:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:16.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:16.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:23:16.948 00:23:16.948 --- 10.0.0.1 ping statistics --- 00:23:16.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.948 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:23:16.948 20:40:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:16.948 20:40:35 -- nvmf/common.sh@410 -- # return 0 00:23:16.948 20:40:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:16.948 20:40:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:16.948 20:40:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:16.948 20:40:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:16.948 20:40:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:16.948 20:40:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:16.948 20:40:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:16.948 20:40:35 -- target/multipath.sh@45 -- # '[' -z ']' 00:23:16.948 20:40:35 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:23:16.948 only one NIC for nvmf test 00:23:16.948 20:40:35 -- target/multipath.sh@47 -- # nvmftestfini 00:23:16.948 20:40:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:16.948 20:40:35 -- nvmf/common.sh@116 -- # sync 00:23:16.948 20:40:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:16.948 20:40:35 -- nvmf/common.sh@119 -- # set +e 00:23:16.948 20:40:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:16.948 20:40:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:16.948 rmmod nvme_tcp 00:23:16.948 rmmod nvme_fabrics 00:23:16.948 rmmod nvme_keyring 00:23:16.948 20:40:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:16.948 20:40:35 -- nvmf/common.sh@123 -- # set -e 00:23:16.948 20:40:35 -- nvmf/common.sh@124 -- # return 0 00:23:16.948 20:40:35 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:23:16.948 20:40:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:16.948 20:40:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:16.948 20:40:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:16.948 20:40:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:16.948 20:40:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:16.948 20:40:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.948 20:40:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:16.948 20:40:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.487 20:40:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:19.487 20:40:37 -- target/multipath.sh@48 -- # exit 0 00:23:19.487 20:40:37 -- target/multipath.sh@1 -- # nvmftestfini 00:23:19.487 20:40:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:19.487 20:40:37 -- nvmf/common.sh@116 -- # sync 00:23:19.487 20:40:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:19.487 20:40:37 -- nvmf/common.sh@119 -- # set +e 00:23:19.487 20:40:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:19.487 20:40:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:19.487 20:40:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:19.487 20:40:37 -- nvmf/common.sh@123 -- # set -e 00:23:19.487 20:40:37 -- nvmf/common.sh@124 -- # return 0 00:23:19.487 20:40:37 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:23:19.487 20:40:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:19.487 20:40:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:19.487 20:40:37 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:23:19.487 20:40:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:19.487 20:40:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:19.487 20:40:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.487 20:40:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:19.487 20:40:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.487 20:40:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:19.487 00:23:19.487 real 0m7.654s 00:23:19.487 user 0m1.509s 00:23:19.487 sys 0m4.073s 00:23:19.487 20:40:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:19.487 20:40:37 -- common/autotest_common.sh@10 -- # set +x 00:23:19.487 ************************************ 00:23:19.487 END TEST nvmf_multipath 00:23:19.487 ************************************ 00:23:19.487 20:40:37 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:23:19.487 20:40:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:19.487 20:40:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:19.487 20:40:37 -- common/autotest_common.sh@10 -- # set +x 00:23:19.487 ************************************ 00:23:19.487 START TEST nvmf_zcopy 00:23:19.487 ************************************ 00:23:19.487 20:40:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:23:19.487 * Looking for test storage... 00:23:19.487 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:23:19.487 20:40:37 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:23:19.487 20:40:37 -- nvmf/common.sh@7 -- # uname -s 00:23:19.487 20:40:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:19.487 20:40:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:19.487 20:40:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:19.487 20:40:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:19.487 20:40:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:19.487 20:40:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:19.487 20:40:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:19.487 20:40:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:19.487 20:40:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:19.487 20:40:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:19.487 20:40:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:23:19.487 20:40:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:23:19.487 20:40:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:19.487 20:40:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:19.487 20:40:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:23:19.487 20:40:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:23:19.487 20:40:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:19.487 20:40:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:19.487 20:40:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:19.487 20:40:37 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.487 20:40:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.487 20:40:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.487 20:40:37 -- paths/export.sh@5 -- # export PATH 00:23:19.487 20:40:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.487 20:40:37 -- nvmf/common.sh@46 -- # : 0 00:23:19.487 20:40:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:19.487 20:40:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:19.487 20:40:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:19.487 20:40:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:19.487 20:40:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:19.487 20:40:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:19.487 20:40:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:19.487 20:40:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:19.487 20:40:37 -- target/zcopy.sh@12 -- # nvmftestinit 00:23:19.487 20:40:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:19.487 20:40:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:19.487 20:40:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:19.487 20:40:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:19.487 20:40:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:19.487 20:40:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.487 20:40:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:19.487 20:40:37 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.487 20:40:37 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:23:19.487 20:40:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:19.487 20:40:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:19.487 20:40:37 -- common/autotest_common.sh@10 -- # set +x 00:23:26.064 20:40:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:26.064 20:40:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:26.064 20:40:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:26.064 20:40:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:26.064 20:40:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:26.064 20:40:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:26.064 20:40:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:26.064 20:40:43 -- nvmf/common.sh@294 -- # net_devs=() 00:23:26.064 20:40:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:26.064 20:40:43 -- nvmf/common.sh@295 -- # e810=() 00:23:26.064 20:40:43 -- nvmf/common.sh@295 -- # local -ga e810 00:23:26.064 20:40:43 -- nvmf/common.sh@296 -- # x722=() 00:23:26.064 20:40:43 -- nvmf/common.sh@296 -- # local -ga x722 00:23:26.064 20:40:43 -- nvmf/common.sh@297 -- # mlx=() 00:23:26.064 20:40:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:26.064 20:40:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:26.064 20:40:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:26.064 20:40:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:26.064 20:40:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:26.064 20:40:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:26.064 20:40:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:26.064 20:40:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:26.064 20:40:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:26.064 20:40:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:26.064 20:40:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:26.064 20:40:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:26.064 20:40:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:26.064 20:40:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:26.064 20:40:43 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:23:26.064 20:40:43 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:23:26.064 20:40:43 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:23:26.064 20:40:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:26.064 20:40:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:26.064 20:40:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:23:26.064 Found 0000:27:00.0 (0x8086 - 0x159b) 00:23:26.064 20:40:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:26.064 20:40:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:26.064 20:40:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.064 20:40:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.064 20:40:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:26.064 20:40:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:26.064 20:40:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:23:26.064 Found 0000:27:00.1 (0x8086 - 0x159b) 00:23:26.064 20:40:43 
-- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:26.064 20:40:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:26.064 20:40:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.064 20:40:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.064 20:40:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:26.064 20:40:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:26.064 20:40:43 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:23:26.064 20:40:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:26.064 20:40:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.064 20:40:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:26.064 20:40:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.064 20:40:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:23:26.064 Found net devices under 0000:27:00.0: cvl_0_0 00:23:26.064 20:40:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.064 20:40:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:26.064 20:40:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.064 20:40:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:26.064 20:40:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.064 20:40:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:23:26.064 Found net devices under 0000:27:00.1: cvl_0_1 00:23:26.064 20:40:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.064 20:40:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:26.064 20:40:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:26.064 20:40:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:26.064 20:40:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:26.065 20:40:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:26.065 20:40:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:26.065 20:40:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:26.065 20:40:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:26.065 20:40:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:26.065 20:40:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:26.065 20:40:43 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:26.065 20:40:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:26.065 20:40:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:26.065 20:40:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:26.065 20:40:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:26.065 20:40:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:26.065 20:40:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:26.065 20:40:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:26.065 20:40:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:26.065 20:40:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:26.065 20:40:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:26.065 20:40:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:26.065 20:40:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:26.065 20:40:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
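With the iptables rule in place, nvmf_tcp_init has once again rebuilt the single-host topology every TCP test in this job relies on: the first of the two ice-driven ports (cvl_0_0) is moved into a private network namespace and plays the target at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace and plays the initiator at 10.0.0.1. Reduced to its essentials, the sequence just traced is:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

The two pings that follow confirm reachability in both directions before any NVMe/TCP traffic is attempted.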
00:23:26.065 20:40:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:26.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:26.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:23:26.065 00:23:26.065 --- 10.0.0.2 ping statistics --- 00:23:26.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.065 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:23:26.065 20:40:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:26.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:26.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:23:26.065 00:23:26.065 --- 10.0.0.1 ping statistics --- 00:23:26.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.065 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:23:26.065 20:40:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:26.065 20:40:43 -- nvmf/common.sh@410 -- # return 0 00:23:26.065 20:40:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:26.065 20:40:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:26.065 20:40:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:26.065 20:40:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:26.065 20:40:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:26.065 20:40:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:26.065 20:40:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:26.065 20:40:43 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:23:26.065 20:40:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:26.065 20:40:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:26.065 20:40:43 -- common/autotest_common.sh@10 -- # set +x 00:23:26.065 20:40:43 -- nvmf/common.sh@469 -- # nvmfpid=3592963 00:23:26.065 20:40:43 -- nvmf/common.sh@470 -- # waitforlisten 3592963 00:23:26.065 20:40:43 -- common/autotest_common.sh@819 -- # '[' -z 3592963 ']' 00:23:26.065 20:40:43 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:26.065 20:40:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.065 20:40:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:26.065 20:40:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.065 20:40:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:26.065 20:40:43 -- common/autotest_common.sh@10 -- # set +x 00:23:26.065 [2024-04-26 20:40:43.663458] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:26.065 [2024-04-26 20:40:43.663589] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.065 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.065 [2024-04-26 20:40:43.801526] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.065 [2024-04-26 20:40:43.900437] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:26.065 [2024-04-26 20:40:43.900653] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:26.065 [2024-04-26 20:40:43.900674] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:26.065 [2024-04-26 20:40:43.900684] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:26.065 [2024-04-26 20:40:43.900716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.065 20:40:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:26.065 20:40:44 -- common/autotest_common.sh@852 -- # return 0 00:23:26.065 20:40:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:26.065 20:40:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:26.065 20:40:44 -- common/autotest_common.sh@10 -- # set +x 00:23:26.327 20:40:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.327 20:40:44 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:23:26.327 20:40:44 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:23:26.327 20:40:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:26.327 20:40:44 -- common/autotest_common.sh@10 -- # set +x 00:23:26.327 [2024-04-26 20:40:44.427587] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.327 20:40:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:26.327 20:40:44 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:23:26.327 20:40:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:26.327 20:40:44 -- common/autotest_common.sh@10 -- # set +x 00:23:26.327 20:40:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:26.327 20:40:44 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:26.327 20:40:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:26.327 20:40:44 -- common/autotest_common.sh@10 -- # set +x 00:23:26.327 [2024-04-26 20:40:44.447794] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.327 20:40:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:26.327 20:40:44 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:26.327 20:40:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:26.327 20:40:44 -- common/autotest_common.sh@10 -- # set +x 00:23:26.327 20:40:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:26.327 20:40:44 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:23:26.327 20:40:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:26.327 20:40:44 -- common/autotest_common.sh@10 -- # set +x 00:23:26.327 malloc0 00:23:26.327 20:40:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:26.327 20:40:44 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:26.327 20:40:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:26.327 20:40:44 -- common/autotest_common.sh@10 -- # set +x 00:23:26.327 20:40:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:26.327 20:40:44 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:23:26.327 20:40:44 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:23:26.327 20:40:44 -- nvmf/common.sh@520 -- # config=() 00:23:26.327 20:40:44 -- 
nvmf/common.sh@520 -- # local subsystem config 00:23:26.327 20:40:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:26.327 20:40:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:26.327 { 00:23:26.327 "params": { 00:23:26.327 "name": "Nvme$subsystem", 00:23:26.327 "trtype": "$TEST_TRANSPORT", 00:23:26.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.327 "adrfam": "ipv4", 00:23:26.327 "trsvcid": "$NVMF_PORT", 00:23:26.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.327 "hdgst": ${hdgst:-false}, 00:23:26.327 "ddgst": ${ddgst:-false} 00:23:26.327 }, 00:23:26.327 "method": "bdev_nvme_attach_controller" 00:23:26.327 } 00:23:26.327 EOF 00:23:26.327 )") 00:23:26.327 20:40:44 -- nvmf/common.sh@542 -- # cat 00:23:26.327 20:40:44 -- nvmf/common.sh@544 -- # jq . 00:23:26.327 20:40:44 -- nvmf/common.sh@545 -- # IFS=, 00:23:26.327 20:40:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:26.327 "params": { 00:23:26.327 "name": "Nvme1", 00:23:26.327 "trtype": "tcp", 00:23:26.327 "traddr": "10.0.0.2", 00:23:26.327 "adrfam": "ipv4", 00:23:26.327 "trsvcid": "4420", 00:23:26.327 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.327 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:26.327 "hdgst": false, 00:23:26.327 "ddgst": false 00:23:26.327 }, 00:23:26.327 "method": "bdev_nvme_attach_controller" 00:23:26.327 }' 00:23:26.327 [2024-04-26 20:40:44.580608] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:26.327 [2024-04-26 20:40:44.580728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3593275 ] 00:23:26.327 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.588 [2024-04-26 20:40:44.697630] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.588 [2024-04-26 20:40:44.787338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.849 Running I/O for 10 seconds... 
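Unlike the queue_depth run, this bdevperf instance needs no attach RPC: gen_nvmf_target_json pipes a ready-made configuration into --json /dev/fd/62, so the Nvme1 controller is wired up during startup. A sketch of an equivalent invocation with the config written to a file; the outer "subsystems"/"bdev" envelope is the standard SPDK JSON-config framing and is assumed here, since the trace only shows the inner entry being printed:

cat > /tmp/zcopy_bdevperf.json << 'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
JSON
/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/zcopy_bdevperf.json -t 10 -q 128 -w verify -o 8192

The interesting part is on the target side: the transport was created with -c 0 --zcopy, so this 8 KiB verify workload runs against a transport configured for zero copy with in-capsule data disabled.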
00:23:36.851 00:23:36.851 Latency(us) 00:23:36.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.851 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:23:36.851 Verification LBA range: start 0x0 length 0x1000 00:23:36.851 Nvme1n1 : 10.01 13128.55 102.57 0.00 0.00 9727.59 1379.71 15935.60 00:23:36.851 =================================================================================================================== 00:23:36.851 Total : 13128.55 102.57 0.00 0.00 9727.59 1379.71 15935.60 00:23:37.111 20:40:55 -- target/zcopy.sh@39 -- # perfpid=3595385 00:23:37.111 20:40:55 -- target/zcopy.sh@41 -- # xtrace_disable 00:23:37.112 20:40:55 -- common/autotest_common.sh@10 -- # set +x 00:23:37.112 20:40:55 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:23:37.112 20:40:55 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:23:37.112 20:40:55 -- nvmf/common.sh@520 -- # config=() 00:23:37.112 20:40:55 -- nvmf/common.sh@520 -- # local subsystem config 00:23:37.112 20:40:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:37.112 20:40:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:37.112 { 00:23:37.112 "params": { 00:23:37.112 "name": "Nvme$subsystem", 00:23:37.112 "trtype": "$TEST_TRANSPORT", 00:23:37.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.112 "adrfam": "ipv4", 00:23:37.112 "trsvcid": "$NVMF_PORT", 00:23:37.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.112 "hdgst": ${hdgst:-false}, 00:23:37.112 "ddgst": ${ddgst:-false} 00:23:37.112 }, 00:23:37.112 "method": "bdev_nvme_attach_controller" 00:23:37.112 } 00:23:37.112 EOF 00:23:37.112 )") 00:23:37.112 [2024-04-26 20:40:55.408912] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:37.112 [2024-04-26 20:40:55.408973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:37.112 20:40:55 -- nvmf/common.sh@542 -- # cat 00:23:37.112 20:40:55 -- nvmf/common.sh@544 -- # jq . 
00:23:37.112 [2024-04-26 20:40:55.416824] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:37.112 [2024-04-26 20:40:55.416857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:37.112 20:40:55 -- nvmf/common.sh@545 -- # IFS=, 00:23:37.112 20:40:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:37.112 "params": { 00:23:37.112 "name": "Nvme1", 00:23:37.112 "trtype": "tcp", 00:23:37.112 "traddr": "10.0.0.2", 00:23:37.112 "adrfam": "ipv4", 00:23:37.112 "trsvcid": "4420", 00:23:37.112 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.112 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:37.112 "hdgst": false, 00:23:37.112 "ddgst": false 00:23:37.112 }, 00:23:37.112 "method": "bdev_nvme_attach_controller" 00:23:37.112 }' 00:23:37.112 [2024-04-26 20:40:55.424782] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:37.112 [2024-04-26 20:40:55.424802] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:37.112 [2024-04-26 20:40:55.432796] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:37.112 [2024-04-26 20:40:55.432814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:37.112 [2024-04-26 20:40:55.440789] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:37.112 [2024-04-26 20:40:55.440806] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:37.112 [2024-04-26 20:40:55.448776] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:37.112 [2024-04-26 20:40:55.448794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:37.374 [2024-04-26 20:40:55.456790] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:37.374 [2024-04-26 20:40:55.456808] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:37.374 [2024-04-26 20:40:55.464789] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:37.374 [2024-04-26 20:40:55.464805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:37.374 [2024-04-26 20:40:55.472800] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:37.374 [2024-04-26 20:40:55.472818] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:37.374 [2024-04-26 20:40:55.473660] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:23:37.374 [2024-04-26 20:40:55.473779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3595385 ] 00:23:37.374 [2024-04-26 20:40:55.480793] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:37.374 [2024-04-26 20:40:55.480808] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:37.374 [2024-04-26 20:40:55.488780] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:37.374 [2024-04-26 20:40:55.488795] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:37.374 [2024-04-26 20:40:55.496795] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:37.374 [2024-04-26 20:40:55.496810] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:37.374 [2024-04-26 20:40:55.504797] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:37.374 [2024-04-26 20:40:55.504812] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:37.374 [2024-04-26 20:40:55.512788] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:37.374 [2024-04-26 20:40:55.512803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:37.374 [2024-04-26 20:40:55.520796] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:37.374 [2024-04-26 20:40:55.520811] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:37.374 [2024-04-26 20:40:55.528802] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:37.374 [2024-04-26 20:40:55.528818] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:37.374 [2024-04-26 20:40:55.536796] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:37.374 [2024-04-26 20:40:55.536810] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:37.374 [2024-04-26 20:40:55.544809] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:37.374 [2024-04-26 20:40:55.544823] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:37.374 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.374 [2024-04-26 20:40:55.552796] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:37.374 [2024-04-26 20:40:55.552810] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:37.374 [2024-04-26 20:40:55.560807] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:37.374 [2024-04-26 20:40:55.560822] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:37.374 [2024-04-26 20:40:55.568812] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:37.374 [2024-04-26 20:40:55.568827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:37.374 [2024-04-26 20:40:55.576803] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:37.374 [2024-04-26 20:40:55.576819] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:37.374 [2024-04-26 20:40:55.584814] 
00:23:37.374 [2024-04-26 20:40:55.588974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
[... error pair repeats from 20:40:55.592 through 20:40:55.672; omitted ...]
00:23:37.374 [2024-04-26 20:40:55.678786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
[... error pair repeats from 20:40:55.680 through 20:40:56.057; omitted ...]
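Each repetition of this pair is one failed retry of the same operation: an add-namespace RPC that pins NSID 1 while the subsystem already has a live namespace 1. A sketch of a call that reproduces the error, with the subsystem NQN taken from the attach parameters above and the bdev name (Malloc0) purely illustrative:

  # Sketch: requesting an NSID that is already allocated fails
  sudo ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nsid 1
  # -> *ERROR*: Requested NSID 1 already in use
  # Omitting --nsid lets the target assign the lowest free NSID instead.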
00:23:37.895 Running I/O for 5 seconds...
[... the same error pair keeps repeating for the whole run, from 20:40:56.069 through 20:40:58.049; repeats omitted ...]
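The 5-second window is bdevperf's measurement phase (the EAL parameters line above identifies this process as bdevperf). A comparable standalone run might look like the sketch below; the binary path, config path, and workload flags are assumptions, not read from this log:

  # Sketch: 5-second random-write run at queue depth 128, 4 KiB I/O
  sudo ./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w randwrite -t 5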
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:39.720 [2024-04-26 20:40:58.049219] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:39.720 [2024-04-26 20:40:58.058319] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:39.720 [2024-04-26 20:40:58.058349] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.029 [2024-04-26 20:40:58.067853] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.029 [2024-04-26 20:40:58.067887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.029 [2024-04-26 20:40:58.075648] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.029 [2024-04-26 20:40:58.075677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.029 [2024-04-26 20:40:58.086281] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.029 [2024-04-26 20:40:58.086314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.029 [2024-04-26 20:40:58.095750] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.029 [2024-04-26 20:40:58.095783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.029 [2024-04-26 20:40:58.105515] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.029 [2024-04-26 20:40:58.105546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.029 [2024-04-26 20:40:58.115329] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.029 [2024-04-26 20:40:58.115359] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.029 [2024-04-26 20:40:58.125070] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.029 [2024-04-26 20:40:58.125101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.029 [2024-04-26 20:40:58.133777] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.029 [2024-04-26 20:40:58.133809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.029 [2024-04-26 20:40:58.143217] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.029 [2024-04-26 20:40:58.143248] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.029 [2024-04-26 20:40:58.152754] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.029 [2024-04-26 20:40:58.152783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.029 [2024-04-26 20:40:58.161857] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.029 [2024-04-26 20:40:58.161888] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.029 [2024-04-26 20:40:58.170928] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.029 [2024-04-26 20:40:58.170958] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.029 [2024-04-26 20:40:58.179945] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.029 [2024-04-26 20:40:58.179973] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.029 [2024-04-26 20:40:58.189347] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.029 [2024-04-26 20:40:58.189376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.029 [2024-04-26 20:40:58.198542] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.029 [2024-04-26 20:40:58.198573] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.029 [2024-04-26 20:40:58.207405] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.029 [2024-04-26 20:40:58.207434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.029 [2024-04-26 20:40:58.216394] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.029 [2024-04-26 20:40:58.216424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.029 [2024-04-26 20:40:58.225986] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.029 [2024-04-26 20:40:58.226015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.029 [2024-04-26 20:40:58.235361] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.029 [2024-04-26 20:40:58.235401] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.029 [2024-04-26 20:40:58.244626] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.029 [2024-04-26 20:40:58.244656] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.029 [2024-04-26 20:40:58.254264] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.029 [2024-04-26 20:40:58.254294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.029 [2024-04-26 20:40:58.262772] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.029 [2024-04-26 20:40:58.262799] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.029 [2024-04-26 20:40:58.272115] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.029 [2024-04-26 20:40:58.272145] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.029 [2024-04-26 20:40:58.281274] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.029 [2024-04-26 20:40:58.281304] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.029 [2024-04-26 20:40:58.289762] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.029 [2024-04-26 20:40:58.289791] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.029 [2024-04-26 20:40:58.298742] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.029 [2024-04-26 20:40:58.298771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.029 [2024-04-26 20:40:58.307808] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.029 [2024-04-26 20:40:58.307838] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.029 [2024-04-26 20:40:58.317340] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.029 [2024-04-26 20:40:58.317369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.029 [2024-04-26 20:40:58.326826] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.029 [2024-04-26 20:40:58.326854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.029 [2024-04-26 20:40:58.336473] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.029 [2024-04-26 20:40:58.336502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.289 [2024-04-26 20:40:58.345582] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.289 [2024-04-26 20:40:58.345612] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.289 [2024-04-26 20:40:58.354561] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.289 [2024-04-26 20:40:58.354589] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.289 [2024-04-26 20:40:58.364624] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.289 [2024-04-26 20:40:58.364656] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.289 [2024-04-26 20:40:58.373796] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.289 [2024-04-26 20:40:58.373828] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.289 [2024-04-26 20:40:58.383269] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.289 [2024-04-26 20:40:58.383299] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.290 [2024-04-26 20:40:58.393106] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.290 [2024-04-26 20:40:58.393138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.290 [2024-04-26 20:40:58.402706] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.290 [2024-04-26 20:40:58.402734] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.290 [2024-04-26 20:40:58.412190] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.290 [2024-04-26 20:40:58.412224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.290 [2024-04-26 20:40:58.421924] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.290 [2024-04-26 20:40:58.421955] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.290 [2024-04-26 20:40:58.431075] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.290 [2024-04-26 20:40:58.431104] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.290 [2024-04-26 20:40:58.440284] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.290 [2024-04-26 20:40:58.440314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.290 [2024-04-26 20:40:58.449923] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.290 [2024-04-26 20:40:58.449953] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.290 [2024-04-26 20:40:58.459457] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.290 [2024-04-26 20:40:58.459485] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.290 [2024-04-26 20:40:58.469120] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.290 [2024-04-26 20:40:58.469148] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.290 [2024-04-26 20:40:58.477618] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.290 [2024-04-26 20:40:58.477647] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.290 [2024-04-26 20:40:58.487071] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.290 [2024-04-26 20:40:58.487099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.290 [2024-04-26 20:40:58.496107] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.290 [2024-04-26 20:40:58.496137] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.290 [2024-04-26 20:40:58.505428] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.290 [2024-04-26 20:40:58.505457] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.290 [2024-04-26 20:40:58.514298] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.290 [2024-04-26 20:40:58.514329] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.290 [2024-04-26 20:40:58.523261] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.290 [2024-04-26 20:40:58.523290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.290 [2024-04-26 20:40:58.532421] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.290 [2024-04-26 20:40:58.532449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.290 [2024-04-26 20:40:58.541236] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.290 [2024-04-26 20:40:58.541267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.290 [2024-04-26 20:40:58.550174] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.290 [2024-04-26 20:40:58.550202] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.290 [2024-04-26 20:40:58.559371] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.290 [2024-04-26 20:40:58.559402] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.290 [2024-04-26 20:40:58.568542] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.290 [2024-04-26 20:40:58.568572] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.290 [2024-04-26 20:40:58.577951] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.290 [2024-04-26 20:40:58.577979] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.290 [2024-04-26 20:40:58.586368] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.290 [2024-04-26 20:40:58.586408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.290 [2024-04-26 20:40:58.595649] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.290 [2024-04-26 20:40:58.595679] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.290 [2024-04-26 20:40:58.605128] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.290 [2024-04-26 20:40:58.605156] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.290 [2024-04-26 20:40:58.614267] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.290 [2024-04-26 20:40:58.614296] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.290 [2024-04-26 20:40:58.623197] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.290 [2024-04-26 20:40:58.623228] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.549 [2024-04-26 20:40:58.632171] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.549 [2024-04-26 20:40:58.632199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.549 [2024-04-26 20:40:58.641154] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.549 [2024-04-26 20:40:58.641184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.549 [2024-04-26 20:40:58.650557] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.549 [2024-04-26 20:40:58.650586] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.549 [2024-04-26 20:40:58.659984] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.549 [2024-04-26 20:40:58.660012] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.549 [2024-04-26 20:40:58.669293] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.549 [2024-04-26 20:40:58.669323] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.549 [2024-04-26 20:40:58.678107] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.549 [2024-04-26 20:40:58.678134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.549 [2024-04-26 20:40:58.687656] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.549 [2024-04-26 20:40:58.687686] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.549 [2024-04-26 20:40:58.696609] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.549 [2024-04-26 20:40:58.696638] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.549 [2024-04-26 20:40:58.706132] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.549 [2024-04-26 20:40:58.706161] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.549 [2024-04-26 20:40:58.714549] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.549 [2024-04-26 20:40:58.714576] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.549 [2024-04-26 20:40:58.724152] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.549 [2024-04-26 20:40:58.724181] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.549 [2024-04-26 20:40:58.733368] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.549 [2024-04-26 20:40:58.733400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.549 [2024-04-26 20:40:58.742297] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.549 [2024-04-26 20:40:58.742325] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.549 [2024-04-26 20:40:58.751760] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.549 [2024-04-26 20:40:58.751790] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.549 [2024-04-26 20:40:58.761408] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.549 [2024-04-26 20:40:58.761442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.549 [2024-04-26 20:40:58.770834] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.549 [2024-04-26 20:40:58.770862] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.549 [2024-04-26 20:40:58.779987] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.549 [2024-04-26 20:40:58.780016] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.549 [2024-04-26 20:40:58.788914] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.550 [2024-04-26 20:40:58.788941] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.550 [2024-04-26 20:40:58.798393] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.550 [2024-04-26 20:40:58.798421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.550 [2024-04-26 20:40:58.806924] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.550 [2024-04-26 20:40:58.806952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.550 [2024-04-26 20:40:58.816551] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.550 [2024-04-26 20:40:58.816580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.550 [2024-04-26 20:40:58.825571] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.550 [2024-04-26 20:40:58.825600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.550 [2024-04-26 20:40:58.835217] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.550 [2024-04-26 20:40:58.835245] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.550 [2024-04-26 20:40:58.844260] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.550 [2024-04-26 20:40:58.844289] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.550 [2024-04-26 20:40:58.853673] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.550 [2024-04-26 20:40:58.853703] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.550 [2024-04-26 20:40:58.862687] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.550 [2024-04-26 20:40:58.862714] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.550 [2024-04-26 20:40:58.872316] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.550 [2024-04-26 20:40:58.872346] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.550 [2024-04-26 20:40:58.881701] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.550 [2024-04-26 20:40:58.881729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.550 [2024-04-26 20:40:58.890128] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.550 [2024-04-26 20:40:58.890158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.810 [2024-04-26 20:40:58.899376] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.810 [2024-04-26 20:40:58.899408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.810 [2024-04-26 20:40:58.908744] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.810 [2024-04-26 20:40:58.908773] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.810 [2024-04-26 20:40:58.917154] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.810 [2024-04-26 20:40:58.917183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.810 [2024-04-26 20:40:58.926647] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.810 [2024-04-26 20:40:58.926677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.810 [2024-04-26 20:40:58.935886] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.810 [2024-04-26 20:40:58.935919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.810 [2024-04-26 20:40:58.944951] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.810 [2024-04-26 20:40:58.944982] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.810 [2024-04-26 20:40:58.954409] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.810 [2024-04-26 20:40:58.954438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.810 [2024-04-26 20:40:58.964041] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.810 [2024-04-26 20:40:58.964069] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.810 [2024-04-26 20:40:58.973501] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.810 [2024-04-26 20:40:58.973531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.810 [2024-04-26 20:40:58.982699] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.810 [2024-04-26 20:40:58.982727] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.810 [2024-04-26 20:40:58.992160] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.810 [2024-04-26 20:40:58.992189] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.810 [2024-04-26 20:40:59.000648] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.810 [2024-04-26 20:40:59.000677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.810 [2024-04-26 20:40:59.010165] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.810 [2024-04-26 20:40:59.010195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.810 [2024-04-26 20:40:59.019783] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.810 [2024-04-26 20:40:59.019812] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.810 [2024-04-26 20:40:59.029497] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.810 [2024-04-26 20:40:59.029526] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.810 [2024-04-26 20:40:59.039178] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.810 [2024-04-26 20:40:59.039207] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.810 [2024-04-26 20:40:59.048553] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.810 [2024-04-26 20:40:59.048583] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.810 [2024-04-26 20:40:59.057865] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.810 [2024-04-26 20:40:59.057894] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.810 [2024-04-26 20:40:59.066944] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.810 [2024-04-26 20:40:59.066973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.810 [2024-04-26 20:40:59.075879] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.810 [2024-04-26 20:40:59.075908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.810 [2024-04-26 20:40:59.085512] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.810 [2024-04-26 20:40:59.085540] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.810 [2024-04-26 20:40:59.094547] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.810 [2024-04-26 20:40:59.094575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.810 [2024-04-26 20:40:59.104045] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.810 [2024-04-26 20:40:59.104076] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.810 [2024-04-26 20:40:59.113691] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.810 [2024-04-26 20:40:59.113726] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.810 [2024-04-26 20:40:59.123397] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.810 [2024-04-26 20:40:59.123424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.810 [2024-04-26 20:40:59.132882] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.810 [2024-04-26 20:40:59.132911] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:40.810 [2024-04-26 20:40:59.142316] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:40.810 [2024-04-26 20:40:59.142345] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.072 [2024-04-26 20:40:59.151287] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.072 [2024-04-26 20:40:59.151317] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.072 [2024-04-26 20:40:59.160771] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.072 [2024-04-26 20:40:59.160801] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.072 [2024-04-26 20:40:59.169215] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.072 [2024-04-26 20:40:59.169243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.072 [2024-04-26 20:40:59.178868] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.072 [2024-04-26 20:40:59.178897] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.072 [2024-04-26 20:40:59.187970] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.072 [2024-04-26 20:40:59.187997] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.072 [2024-04-26 20:40:59.196884] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.072 [2024-04-26 20:40:59.196911] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.072 [2024-04-26 20:40:59.205945] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.072 [2024-04-26 20:40:59.205975] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.072 [2024-04-26 20:40:59.215976] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.072 [2024-04-26 20:40:59.216008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.072 [2024-04-26 20:40:59.225541] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.072 [2024-04-26 20:40:59.225579] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.072 [2024-04-26 20:40:59.234849] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.072 [2024-04-26 20:40:59.234879] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.072 [2024-04-26 20:40:59.244278] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.072 [2024-04-26 20:40:59.244307] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.073 [2024-04-26 20:40:59.254124] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.073 [2024-04-26 20:40:59.254157] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.073 [2024-04-26 20:40:59.263276] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.073 [2024-04-26 20:40:59.263307] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.073 [2024-04-26 20:40:59.272458] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.073 [2024-04-26 20:40:59.272488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.073 [2024-04-26 20:40:59.281634] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.073 [2024-04-26 20:40:59.281663] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.073 [2024-04-26 20:40:59.290963] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.073 [2024-04-26 20:40:59.290993] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.073 [2024-04-26 20:40:59.299500] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.073 [2024-04-26 20:40:59.299530] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.073 [2024-04-26 20:40:59.308424] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.073 [2024-04-26 20:40:59.308453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.073 [2024-04-26 20:40:59.317924] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.073 [2024-04-26 20:40:59.317954] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.073 [2024-04-26 20:40:59.327318] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.073 [2024-04-26 20:40:59.327350] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.073 [2024-04-26 20:40:59.336514] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.073 [2024-04-26 20:40:59.336544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.073 [2024-04-26 20:40:59.345773] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.073 [2024-04-26 20:40:59.345802] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.073 [2024-04-26 20:40:59.354161] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.073 [2024-04-26 20:40:59.354190] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.073 [2024-04-26 20:40:59.363709] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.073 [2024-04-26 20:40:59.363738] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.073 [2024-04-26 20:40:59.372032] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.073 [2024-04-26 20:40:59.372061] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.073 [2024-04-26 20:40:59.380965] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.073 [2024-04-26 20:40:59.380994] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.073 [2024-04-26 20:40:59.390515] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.073 [2024-04-26 20:40:59.390545] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.073 [2024-04-26 20:40:59.399800] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.073 [2024-04-26 20:40:59.399833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.073 [2024-04-26 20:40:59.409513] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.073 [2024-04-26 20:40:59.409544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.333 [2024-04-26 20:40:59.419053] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.333 [2024-04-26 20:40:59.419085] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.333 [2024-04-26 20:40:59.428904] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.333 [2024-04-26 20:40:59.428935] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.333 [2024-04-26 20:40:59.438103] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.333 [2024-04-26 20:40:59.438133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.333 [2024-04-26 20:40:59.447086] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.333 [2024-04-26 20:40:59.447118] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.333 [2024-04-26 20:40:59.456351] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.333 [2024-04-26 20:40:59.456387] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.333 [2024-04-26 20:40:59.465841] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.333 [2024-04-26 20:40:59.465869] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.333 [2024-04-26 20:40:59.475053] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.333 [2024-04-26 20:40:59.475082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.333 [2024-04-26 20:40:59.484295] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.333 [2024-04-26 20:40:59.484327] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.333 [2024-04-26 20:40:59.493267] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.333 [2024-04-26 20:40:59.493310] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.333 [2024-04-26 20:40:59.500601] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.333 [2024-04-26 20:40:59.500632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.333 [2024-04-26 20:40:59.510976] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.333 [2024-04-26 20:40:59.511007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.333 [2024-04-26 20:40:59.525424] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.333 [2024-04-26 20:40:59.525455] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.333 [2024-04-26 20:40:59.533853] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.333 [2024-04-26 20:40:59.533885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.333 [2024-04-26 20:40:59.542838] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.333 [2024-04-26 20:40:59.542867] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.333 [2024-04-26 20:40:59.551521] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.333 [2024-04-26 20:40:59.551551] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.333 [2024-04-26 20:40:59.561104] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.333 [2024-04-26 20:40:59.561134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.333 [2024-04-26 20:40:59.570549] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.333 [2024-04-26 20:40:59.570579] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.333 [2024-04-26 20:40:59.579220] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.333 [2024-04-26 20:40:59.579250] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.333 [2024-04-26 20:40:59.588279] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.333 [2024-04-26 20:40:59.588309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.333 [2024-04-26 20:40:59.597417] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.333 [2024-04-26 20:40:59.597448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.333 [2024-04-26 20:40:59.606541] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.333 [2024-04-26 20:40:59.606572] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.333 [2024-04-26 20:40:59.616257] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.333 [2024-04-26 20:40:59.616286] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.333 [2024-04-26 20:40:59.624890] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.333 [2024-04-26 20:40:59.624921] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.333 [2024-04-26 20:40:59.633924] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.333 [2024-04-26 20:40:59.633963] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.333 [2024-04-26 20:40:59.643135] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.333 [2024-04-26 20:40:59.643165] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.333 [2024-04-26 20:40:59.652571] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.333 [2024-04-26 20:40:59.652601] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.333 [2024-04-26 20:40:59.661323] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.333 [2024-04-26 20:40:59.661355] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.333 [2024-04-26 20:40:59.670982] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.333 [2024-04-26 20:40:59.671013] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.593 [2024-04-26 20:40:59.680139] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.593 [2024-04-26 20:40:59.680171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.593 [2024-04-26 20:40:59.689095] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.593 [2024-04-26 20:40:59.689124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.593 [2024-04-26 20:40:59.698277] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.593 [2024-04-26 20:40:59.698312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.593 [2024-04-26 20:40:59.707705] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.593 [2024-04-26 20:40:59.707737] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.593 [2024-04-26 20:40:59.717306] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.593 [2024-04-26 20:40:59.717337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.593 [2024-04-26 20:40:59.725869] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.593 [2024-04-26 20:40:59.725898] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.593 [2024-04-26 20:40:59.734941] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.593 [2024-04-26 20:40:59.734973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.593 [2024-04-26 20:40:59.743974] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.593 [2024-04-26 20:40:59.744005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.593 [2024-04-26 20:40:59.753697] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.593 [2024-04-26 20:40:59.753727] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.593 [2024-04-26 20:40:59.762728] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.593 [2024-04-26 20:40:59.762760] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.593 [2024-04-26 20:40:59.771942] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.593 [2024-04-26 20:40:59.771973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.593 [2024-04-26 20:40:59.781551] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.593 [2024-04-26 20:40:59.781582] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:41.593 [2024-04-26 20:40:59.791370] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:41.593 [2024-04-26 20:40:59.791409] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:23:41.593 [2024-04-26 20:40:59.800568] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:23:41.593 [2024-04-26 20:40:59.800598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats roughly every 9-10 ms, from 2024-04-26 20:40:59.810021 through 20:41:01.001669 (elapsed 00:23:41.593 to 00:23:43.114); each iteration of the test loop tries to add NSID 1 again and is rejected, which is the behavior under test ...]
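The repeated failure pair above is produced deliberately: while the random read/write job is still running, the test keeps re-adding a namespace ID that is already attached. A minimal sketch of such a loop, assuming the standard scripts/rpc.py client and the cnode1/Malloc0 names seen elsewhere in this log; the exact loop body in test/nvmf/target/zcopy.sh may differ:

    # Sketch: hammer the subsystem with duplicate-NSID adds while I/O runs.
    # $perf_pid (the I/O job's pid) and $rootdir (the SPDK tree) are assumptions.
    while kill -0 "$perf_pid" 2>/dev/null; do
        # NSID 1 is already attached, so every call fails with
        # "Requested NSID 1 already in use" / "Unable to add namespace".
        "$rootdir/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1 || true
    done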
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:23:43.114 [2024-04-26 20:41:01.001696] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the error pair continues at the same cadence through 2024-04-26 20:41:01.062395 ...]
00:23:43.114
00:23:43.114 Latency(us)
00:23:43.114 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:43.114 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:23:43.114 Nvme1n1 : 5.00 17382.90 135.80 0.00 0.00 7357.84 3035.35 16349.51
00:23:43.114 ===================================================================================================================
00:23:43.114 Total : 17382.90 135.80 0.00 0.00 7357.84 3035.35 16349.51
[... once the I/O statistics are printed the loop winds down: the same error pair repeats from 2024-04-26 20:41:01.070340 through 20:41:01.430469, elapsed 00:23:43.114 to 00:23:43.115 ...]
00:23:43.115 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3595385) - No such process
00:23:43.115 20:41:01 -- target/zcopy.sh@49 -- # wait 3595385
00:23:43.115 20:41:01 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:23:43.115 20:41:01 -- common/autotest_common.sh@551 -- # xtrace_disable
00:23:43.115 20:41:01 -- common/autotest_common.sh@10 -- # set +x
00:23:43.115 20:41:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
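The records that follow swap the malloc namespace for a delay bdev so that I/O stays in flight long enough for the abort example to catch it. A rough equivalent with plain rpc.py calls, using the flag values copied from this log; per the rpc.py help text, -r/-t are the average/p99 read latencies and -w/-n the average/p99 write latencies, in microseconds (treat that reading as an assumption if your SPDK version differs):

    # Sketch: put a ~1 s delay bdev in front of malloc0, then expose it as NSID 1.
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1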
00:23:43.115 20:41:01 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:23:43.115 20:41:01 -- common/autotest_common.sh@551 -- # xtrace_disable
00:23:43.115 20:41:01 -- common/autotest_common.sh@10 -- # set +x
00:23:43.115 delay0
00:23:43.115 20:41:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:23:43.115 20:41:01 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:23:43.115 20:41:01 -- common/autotest_common.sh@551 -- # xtrace_disable
00:23:43.115 20:41:01 -- common/autotest_common.sh@10 -- # set +x
00:23:43.374 20:41:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:23:43.374 20:41:01 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:23:43.374 EAL: No free 2048 kB hugepages reported on node 1
00:23:43.374 [2024-04-26 20:41:01.586115] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:23:49.941 [2024-04-26 20:41:07.686728] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set
00:23:49.941 Initializing NVMe Controllers
00:23:49.941 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:49.941 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:49.941 Initialization complete. Launching workers.
00:23:49.941 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 48
00:23:49.941 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 335, failed to submit 33
00:23:49.941 success 98, unsuccess 237, failed 0
00:23:49.941 20:41:07 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:23:49.941 20:41:07 -- target/zcopy.sh@60 -- # nvmftestfini
00:23:49.941 20:41:07 -- nvmf/common.sh@476 -- # nvmfcleanup
00:23:49.941 20:41:07 -- nvmf/common.sh@116 -- # sync
00:23:49.941 20:41:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:23:49.941 20:41:07 -- nvmf/common.sh@119 -- # set +e
00:23:49.941 20:41:07 -- nvmf/common.sh@120 -- # for i in {1..20}
00:23:49.941 20:41:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:23:49.942 rmmod nvme_tcp
00:23:49.942 rmmod nvme_fabrics
00:23:49.942 rmmod nvme_keyring
00:23:49.942 20:41:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:23:49.942 20:41:07 -- nvmf/common.sh@123 -- # set -e
00:23:49.942 20:41:07 -- nvmf/common.sh@124 -- # return 0
00:23:49.942 20:41:07 -- nvmf/common.sh@477 -- # '[' -n 3592963 ']'
00:23:49.942 20:41:07 -- nvmf/common.sh@478 -- # killprocess 3592963
00:23:49.942 20:41:07 -- common/autotest_common.sh@926 -- # '[' -z 3592963 ']'
00:23:49.942 20:41:07 -- common/autotest_common.sh@930 -- # kill -0 3592963
00:23:49.942 20:41:07 -- common/autotest_common.sh@931 -- # uname
00:23:49.942 20:41:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:23:49.942 20:41:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3592963
00:23:49.942 20:41:07 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:23:49.942 20:41:07 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:23:49.942 20:41:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3592963'
00:23:49.942 killing process with pid 3592963
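The killprocess helper whose trace appears above follows a common teardown pattern: check that the pid is alive, refuse to signal a sudo wrapper, then kill and reap. A sketch reconstructed from the xtrace alone; the real implementation lives in test/common/autotest_common.sh and handles more cases:

    # Sketch of a killprocess-style helper inferred from the trace above.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2>/dev/null || return 0          # already gone
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1      # never kill the sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }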
00:23:49.942 20:41:07 -- common/autotest_common.sh@945 -- # kill 3592963
00:23:49.942 20:41:07 -- common/autotest_common.sh@950 -- # wait 3592963
00:23:50.201 20:41:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:23:50.201 20:41:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:23:50.201 20:41:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:23:50.201 20:41:08 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:50.201 20:41:08 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:23:50.201 20:41:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:50.201 20:41:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:50.201 20:41:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:52.138 20:41:10 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:23:52.138
00:23:52.138 real 0m32.993s
00:23:52.138 user 0m46.726s
00:23:52.138 sys 0m8.257s
00:23:52.138 20:41:10 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:23:52.138 20:41:10 -- common/autotest_common.sh@10 -- # set +x
00:23:52.138 ************************************
00:23:52.138 END TEST nvmf_zcopy
00:23:52.138 ************************************
00:23:52.138 20:41:10 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:23:52.138 20:41:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:23:52.138 20:41:10 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:23:52.138 20:41:10 -- common/autotest_common.sh@10 -- # set +x
00:23:52.138 ************************************
00:23:52.138 START TEST nvmf_nmic
00:23:52.138 ************************************
00:23:52.138 20:41:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:23:52.138 * Looking for test storage...
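Each test script is driven through run_test, which prints the starred START/END banners and the bash time summary (real/user/sys) seen above. A plausible minimal shape of that wrapper, assuming the banner format in this log; the real helper in autotest_common.sh also manages xtrace state and exit codes:

    # Sketch of a run_test-style wrapper matching the banners in this log.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # bash keyword; emits the real/user/sys lines
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }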
00:23:52.138 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target
00:23:52.138 20:41:10 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh
00:23:52.138 20:41:10 -- nvmf/common.sh@7 -- # uname -s
00:23:52.138 20:41:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:23:52.138 20:41:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:23:52.138 20:41:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:23:52.138 20:41:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:23:52.138 20:41:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:23:52.138 20:41:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:23:52.138 20:41:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:23:52.138 20:41:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:23:52.138 20:41:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:23:52.138 20:41:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:23:52.138 20:41:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda
00:23:52.138 20:41:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda
00:23:52.138 20:41:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:23:52.138 20:41:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:23:52.138 20:41:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:23:52.139 20:41:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh
00:23:52.139 20:41:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:23:52.139 20:41:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:52.139 20:41:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[... paths/export.sh@2 through @6 repeatedly prepend the toolchain directories and export PATH; the multi-kilobyte PATH values, which repeat the prefix /opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin several times before the standard system directories, are left elided here ...]
00:23:52.139 20:41:10 -- nvmf/common.sh@46 -- # : 0
00:23:52.139 20:41:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:23:52.139 20:41:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:23:52.139 20:41:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:23:52.139 20:41:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:23:52.399 20:41:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:23:52.399 20:41:10 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:23:52.399 20:41:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:23:52.399 20:41:10 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:23:52.399 20:41:10 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:23:52.399 20:41:10 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:23:52.399 20:41:10 -- target/nmic.sh@14 -- # nvmftestinit
00:23:52.399 20:41:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:23:52.399 20:41:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:52.399 20:41:10 -- nvmf/common.sh@436 -- # prepare_net_devs
00:23:52.399 20:41:10 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:23:52.399 20:41:10 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:23:52.399 20:41:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:52.399 20:41:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:52.399 20:41:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:52.399 20:41:10 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]]
00:23:52.399 20:41:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:23:52.399 20:41:10 -- nvmf/common.sh@284 -- # xtrace_disable
00:23:52.399 20:41:10 -- common/autotest_common.sh@10 -- # set +x
00:23:57.675 20:41:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
00:23:57.675 20:41:15 -- nvmf/common.sh@290 -- # pci_devs=()
00:23:57.675 20:41:15 -- nvmf/common.sh@290 -- # local -a pci_devs
00:23:57.675 20:41:15 -- nvmf/common.sh@291 -- # pci_net_devs=()
00:23:57.675 20:41:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs
00:23:57.675 20:41:15 -- nvmf/common.sh@292 -- # pci_drivers=()
00:23:57.675 20:41:15 -- nvmf/common.sh@292 -- # local -A pci_drivers
00:23:57.675 20:41:15 -- nvmf/common.sh@294 -- # net_devs=()
00:23:57.675 20:41:15 -- nvmf/common.sh@294 -- # local -ga net_devs
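The gather_supported_nvmf_pci_devs call entered above walks the PCI bus and buckets NICs by vendor/device ID before the interfaces are picked; its discovery output follows in the log. A simplified sketch of that bucketing, assuming the usual sysfs layout; the real function in nvmf/common.sh consults a pci_bus_cache built elsewhere:

    # Sketch: find Intel E810-family NICs (vendor 0x8086, device 0x159b) via sysfs.
    intel=0x8086
    e810=()
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        if [[ $vendor == "$intel" && $device == 0x159b ]]; then
            e810+=("${pci##*/}")
            echo "Found ${pci##*/} ($vendor - $device)"
        fi
    done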
00:23:57.675 20:41:15 -- nvmf/common.sh@295 -- # e810=()
00:23:57.675 20:41:15 -- nvmf/common.sh@295 -- # local -ga e810
00:23:57.675 20:41:15 -- nvmf/common.sh@296 -- # x722=()
00:23:57.675 20:41:15 -- nvmf/common.sh@296 -- # local -ga x722
00:23:57.675 20:41:15 -- nvmf/common.sh@297 -- # mlx=()
00:23:57.675 20:41:15 -- nvmf/common.sh@297 -- # local -ga mlx
[... nvmf/common.sh@300-@317: the e810, x722 and mlx tables are filled from pci_bus_cache with the supported device IDs (Intel 0x1592, 0x159b, 0x37d2; Mellanox 0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x1017, 0x1019, 0x1015, 0x1013) ...]
00:23:57.675 20:41:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}")
00:23:57.675 20:41:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]]
00:23:57.675 20:41:15 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]]
00:23:57.675 20:41:15 -- nvmf/common.sh@328 -- # [[ '' == e810 ]]
00:23:57.675 20:41:15 -- nvmf/common.sh@330 -- # [[ '' == x722 ]]
00:23:57.675 20:41:15 -- nvmf/common.sh@334 -- # (( 2 == 0 ))
00:23:57.675 20:41:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:23:57.675 20:41:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)'
00:23:57.675 Found 0000:27:00.0 (0x8086 - 0x159b)
00:23:57.675 20:41:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:23:57.675 20:41:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:23:57.675 20:41:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:57.675 20:41:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:57.675 20:41:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:23:57.675 20:41:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:23:57.675 20:41:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)'
00:23:57.675 Found 0000:27:00.1 (0x8086 - 0x159b)
[... the same ice driver checks (@341 through @351) repeat for 0000:27:00.1 ...]
00:23:57.675 20:41:15 -- nvmf/common.sh@365 -- # (( 0 > 0 ))
00:23:57.675 20:41:15 -- nvmf/common.sh@371 -- # [[ '' == e810 ]]
00:23:57.675 20:41:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:23:57.675 20:41:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:57.675 20:41:15 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:23:57.675 20:41:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:57.675 20:41:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0'
00:23:57.675 Found net devices under 0000:27:00.0: cvl_0_0
20:41:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.676 20:41:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:57.676 20:41:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.676 20:41:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:57.676 20:41:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.676 20:41:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:23:57.676 Found net devices under 0000:27:00.1: cvl_0_1 00:23:57.676 20:41:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.676 20:41:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:57.676 20:41:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:57.676 20:41:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:57.676 20:41:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:57.676 20:41:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:57.676 20:41:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:57.676 20:41:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:57.676 20:41:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:57.676 20:41:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:57.676 20:41:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:57.676 20:41:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:57.676 20:41:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:57.676 20:41:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:57.676 20:41:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:57.676 20:41:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:57.676 20:41:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:57.676 20:41:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:57.676 20:41:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:57.676 20:41:16 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:57.937 20:41:16 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:57.937 20:41:16 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:57.937 20:41:16 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:57.937 20:41:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:57.937 20:41:16 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:57.937 20:41:16 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:57.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:57.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:23:57.937 00:23:57.937 --- 10.0.0.2 ping statistics --- 00:23:57.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.937 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:23:57.937 20:41:16 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:57.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:57.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:23:57.937 00:23:57.937 --- 10.0.0.1 ping statistics --- 00:23:57.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.937 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:23:57.937 20:41:16 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:57.937 20:41:16 -- nvmf/common.sh@410 -- # return 0 00:23:57.937 20:41:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:57.937 20:41:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:57.937 20:41:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:57.937 20:41:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:57.937 20:41:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:57.937 20:41:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:57.937 20:41:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:57.937 20:41:16 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:23:57.937 20:41:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:57.937 20:41:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:57.937 20:41:16 -- common/autotest_common.sh@10 -- # set +x 00:23:57.937 20:41:16 -- nvmf/common.sh@469 -- # nvmfpid=3601704 00:23:57.937 20:41:16 -- nvmf/common.sh@470 -- # waitforlisten 3601704 00:23:57.937 20:41:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:57.937 20:41:16 -- common/autotest_common.sh@819 -- # '[' -z 3601704 ']' 00:23:57.937 20:41:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.937 20:41:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:57.937 20:41:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.937 20:41:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:57.937 20:41:16 -- common/autotest_common.sh@10 -- # set +x 00:23:57.937 [2024-04-26 20:41:16.227587] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:57.937 [2024-04-26 20:41:16.227720] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:58.198 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.198 [2024-04-26 20:41:16.366100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:58.198 [2024-04-26 20:41:16.461193] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:58.198 [2024-04-26 20:41:16.461405] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:58.198 [2024-04-26 20:41:16.461421] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:58.198 [2024-04-26 20:41:16.461433] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
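nvmfappstart then launches nvmf_tgt inside that namespace and waitforlisten blocks until the RPC socket answers. A simplified stand-in for that start-and-wait step, assuming the repo-relative paths seen in the trace; the polling loop is an illustrative reduction of the real helper:

```bash
# Start the target in the namespace with the flags from the trace,
# then poll /var/tmp/spdk.sock until the RPC server responds.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for _ in $(seq 1 100); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done
echo "nvmf_tgt up as pid $nvmfpid"
```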
00:23:58.198 [2024-04-26 20:41:16.461510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.198 [2024-04-26 20:41:16.461526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:58.198 [2024-04-26 20:41:16.461576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.198 [2024-04-26 20:41:16.461584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:58.768 20:41:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:58.768 20:41:16 -- common/autotest_common.sh@852 -- # return 0 00:23:58.768 20:41:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:58.768 20:41:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:58.768 20:41:16 -- common/autotest_common.sh@10 -- # set +x 00:23:58.768 20:41:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:58.768 20:41:16 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:58.768 20:41:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:58.768 20:41:16 -- common/autotest_common.sh@10 -- # set +x 00:23:58.768 [2024-04-26 20:41:16.944729] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:58.768 20:41:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:58.768 20:41:16 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:58.768 20:41:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:58.768 20:41:16 -- common/autotest_common.sh@10 -- # set +x 00:23:58.768 Malloc0 00:23:58.768 20:41:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:58.768 20:41:16 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:58.768 20:41:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:58.768 20:41:16 -- common/autotest_common.sh@10 -- # set +x 00:23:58.768 20:41:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:58.768 20:41:16 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:58.768 20:41:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:58.768 20:41:16 -- common/autotest_common.sh@10 -- # set +x 00:23:58.768 20:41:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:58.768 20:41:17 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:58.768 20:41:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:58.768 20:41:17 -- common/autotest_common.sh@10 -- # set +x 00:23:58.768 [2024-04-26 20:41:17.010042] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:58.768 20:41:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:58.768 20:41:17 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:23:58.768 test case1: single bdev can't be used in multiple subsystems 00:23:58.768 20:41:17 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:23:58.768 20:41:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:58.768 20:41:17 -- common/autotest_common.sh@10 -- # set +x 00:23:58.768 20:41:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:58.768 20:41:17 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:58.768 20:41:17 -- common/autotest_common.sh@551 -- # xtrace_disable 
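The rpc_cmd calls that follow provision the whole target: the TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem cnode1 with serial SPDKISFASTANDAWESOME, the bdev attached as a namespace, and a listener on 10.0.0.2:4420. The same sequence as direct rpc.py invocations, every argument copied from the trace (rpc_cmd is the harness's wrapper around the RPC client):

```bash
rpc=./scripts/rpc.py                          # path assumed repo-relative
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0     # 64 MiB, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```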
00:23:58.768 20:41:17 -- common/autotest_common.sh@10 -- # set +x 00:23:58.768 20:41:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:58.768 20:41:17 -- target/nmic.sh@28 -- # nmic_status=0 00:23:58.768 20:41:17 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:23:58.768 20:41:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:58.768 20:41:17 -- common/autotest_common.sh@10 -- # set +x 00:23:58.768 [2024-04-26 20:41:17.033769] bdev.c:7935:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:23:58.768 [2024-04-26 20:41:17.033798] subsystem.c:1779:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:23:58.768 [2024-04-26 20:41:17.033817] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:58.768 request: 00:23:58.768 { 00:23:58.768 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:23:58.768 "namespace": { 00:23:58.768 "bdev_name": "Malloc0" 00:23:58.768 }, 00:23:58.768 "method": "nvmf_subsystem_add_ns", 00:23:58.768 "req_id": 1 00:23:58.768 } 00:23:58.768 Got JSON-RPC error response 00:23:58.768 response: 00:23:58.768 { 00:23:58.768 "code": -32602, 00:23:58.768 "message": "Invalid parameters" 00:23:58.768 } 00:23:58.768 20:41:17 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:23:58.768 20:41:17 -- target/nmic.sh@29 -- # nmic_status=1 00:23:58.768 20:41:17 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:23:58.768 20:41:17 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:23:58.768 Adding namespace failed - expected result. 00:23:58.768 20:41:17 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:23:58.768 test case2: host connect to nvmf target in multiple paths 00:23:58.768 20:41:17 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:58.768 20:41:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:58.768 20:41:17 -- common/autotest_common.sh@10 -- # set +x 00:23:58.768 [2024-04-26 20:41:17.041890] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:58.768 20:41:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:58.768 20:41:17 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:00.155 20:41:18 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:24:02.061 20:41:19 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:24:02.061 20:41:19 -- common/autotest_common.sh@1177 -- # local i=0 00:24:02.061 20:41:19 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:02.061 20:41:19 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:02.061 20:41:19 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:03.965 20:41:21 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:03.965 20:41:21 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:03.965 20:41:21 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:24:03.965 20:41:21 -- common/autotest_common.sh@1186 -- # 
nvme_devices=1 00:24:03.965 20:41:21 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:03.965 20:41:21 -- common/autotest_common.sh@1187 -- # return 0 00:24:03.965 20:41:21 -- target/nmic.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:24:03.965 [global] 00:24:03.965 thread=1 00:24:03.965 invalidate=1 00:24:03.965 rw=write 00:24:03.965 time_based=1 00:24:03.965 runtime=1 00:24:03.965 ioengine=libaio 00:24:03.965 direct=1 00:24:03.965 bs=4096 00:24:03.965 iodepth=1 00:24:03.965 norandommap=0 00:24:03.965 numjobs=1 00:24:03.965 00:24:03.965 verify_dump=1 00:24:03.965 verify_backlog=512 00:24:03.965 verify_state_save=0 00:24:03.965 do_verify=1 00:24:03.965 verify=crc32c-intel 00:24:03.965 [job0] 00:24:03.965 filename=/dev/nvme0n1 00:24:03.965 Could not set queue depth (nvme0n1) 00:24:04.225 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:04.225 fio-3.35 00:24:04.225 Starting 1 thread 00:24:05.162 00:24:05.162 job0: (groupid=0, jobs=1): err= 0: pid=3603098: Fri Apr 26 20:41:23 2024 00:24:05.162 read: IOPS=1567, BW=6270KiB/s (6420kB/s)(6276KiB/1001msec) 00:24:05.162 slat (nsec): min=4643, max=50448, avg=6145.83, stdev=1417.35 00:24:05.162 clat (usec): min=244, max=422, avg=318.30, stdev=28.09 00:24:05.162 lat (usec): min=251, max=429, avg=324.45, stdev=28.11 00:24:05.162 clat percentiles (usec): 00:24:05.162 | 1.00th=[ 289], 5.00th=[ 297], 10.00th=[ 302], 20.00th=[ 306], 00:24:05.162 | 30.00th=[ 306], 40.00th=[ 306], 50.00th=[ 310], 60.00th=[ 314], 00:24:05.162 | 70.00th=[ 314], 80.00th=[ 318], 90.00th=[ 367], 95.00th=[ 396], 00:24:05.162 | 99.00th=[ 408], 99.50th=[ 408], 99.90th=[ 420], 99.95th=[ 424], 00:24:05.162 | 99.99th=[ 424] 00:24:05.162 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:24:05.162 slat (usec): min=6, max=109, avg= 9.13, stdev= 5.19 00:24:05.162 clat (usec): min=166, max=861, avg=227.11, stdev=42.10 00:24:05.162 lat (usec): min=174, max=971, avg=236.24, stdev=45.69 00:24:05.162 clat percentiles (usec): 00:24:05.162 | 1.00th=[ 186], 5.00th=[ 206], 10.00th=[ 208], 20.00th=[ 212], 00:24:05.162 | 30.00th=[ 212], 40.00th=[ 215], 50.00th=[ 217], 60.00th=[ 219], 00:24:05.162 | 70.00th=[ 221], 80.00th=[ 225], 90.00th=[ 251], 95.00th=[ 306], 00:24:05.162 | 99.00th=[ 445], 99.50th=[ 469], 99.90th=[ 578], 99.95th=[ 586], 00:24:05.162 | 99.99th=[ 865] 00:24:05.162 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:24:05.162 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:24:05.162 lat (usec) : 250=50.93%, 500=48.82%, 750=0.22%, 1000=0.03% 00:24:05.162 cpu : usr=2.20%, sys=3.70%, ctx=3618, majf=0, minf=1 00:24:05.162 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:05.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.162 issued rwts: total=1569,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.162 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:05.162 00:24:05.162 Run status group 0 (all jobs): 00:24:05.162 READ: bw=6270KiB/s (6420kB/s), 6270KiB/s-6270KiB/s (6420kB/s-6420kB/s), io=6276KiB (6427kB), run=1001-1001msec 00:24:05.162 WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:24:05.162 00:24:05.162 Disk 
stats (read/write): 00:24:05.162 nvme0n1: ios=1586/1637, merge=0/0, ticks=738/366, in_queue=1104, util=96.59% 00:24:05.162 20:41:23 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:05.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:24:05.732 20:41:23 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:05.732 20:41:23 -- common/autotest_common.sh@1198 -- # local i=0 00:24:05.732 20:41:23 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:05.732 20:41:23 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:05.732 20:41:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:05.732 20:41:23 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:05.732 20:41:23 -- common/autotest_common.sh@1210 -- # return 0 00:24:05.732 20:41:23 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:24:05.732 20:41:23 -- target/nmic.sh@53 -- # nvmftestfini 00:24:05.732 20:41:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:05.732 20:41:23 -- nvmf/common.sh@116 -- # sync 00:24:05.732 20:41:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:05.732 20:41:23 -- nvmf/common.sh@119 -- # set +e 00:24:05.732 20:41:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:05.732 20:41:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:05.732 rmmod nvme_tcp 00:24:05.732 rmmod nvme_fabrics 00:24:05.732 rmmod nvme_keyring 00:24:05.732 20:41:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:05.732 20:41:23 -- nvmf/common.sh@123 -- # set -e 00:24:05.732 20:41:23 -- nvmf/common.sh@124 -- # return 0 00:24:05.732 20:41:23 -- nvmf/common.sh@477 -- # '[' -n 3601704 ']' 00:24:05.732 20:41:23 -- nvmf/common.sh@478 -- # killprocess 3601704 00:24:05.732 20:41:23 -- common/autotest_common.sh@926 -- # '[' -z 3601704 ']' 00:24:05.732 20:41:23 -- common/autotest_common.sh@930 -- # kill -0 3601704 00:24:05.732 20:41:23 -- common/autotest_common.sh@931 -- # uname 00:24:05.732 20:41:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:05.732 20:41:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3601704 00:24:05.732 20:41:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:05.732 20:41:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:05.732 20:41:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3601704' 00:24:05.732 killing process with pid 3601704 00:24:05.732 20:41:23 -- common/autotest_common.sh@945 -- # kill 3601704 00:24:05.732 20:41:23 -- common/autotest_common.sh@950 -- # wait 3601704 00:24:06.300 20:41:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:06.300 20:41:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:06.300 20:41:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:06.300 20:41:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:06.300 20:41:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:06.300 20:41:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.300 20:41:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:06.300 20:41:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.202 20:41:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:08.202 00:24:08.202 real 0m16.128s 00:24:08.202 user 0m44.618s 00:24:08.202 sys 0m5.044s 00:24:08.202 20:41:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:08.202 20:41:26 -- 
common/autotest_common.sh@10 -- # set +x 00:24:08.202 ************************************ 00:24:08.202 END TEST nvmf_nmic 00:24:08.202 ************************************ 00:24:08.460 20:41:26 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:24:08.460 20:41:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:08.460 20:41:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:08.460 20:41:26 -- common/autotest_common.sh@10 -- # set +x 00:24:08.460 ************************************ 00:24:08.460 START TEST nvmf_fio_target 00:24:08.460 ************************************ 00:24:08.460 20:41:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:24:08.460 * Looking for test storage... 00:24:08.460 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:24:08.460 20:41:26 -- target/fio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.460 20:41:26 -- nvmf/common.sh@7 -- # uname -s 00:24:08.460 20:41:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.460 20:41:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.460 20:41:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.460 20:41:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.460 20:41:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.460 20:41:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.460 20:41:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.460 20:41:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.460 20:41:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.460 20:41:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.460 20:41:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:24:08.460 20:41:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:24:08.460 20:41:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.460 20:41:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.460 20:41:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:08.460 20:41:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:24:08.460 20:41:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.460 20:41:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.460 20:41:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.460 20:41:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.460 20:41:26 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.460 20:41:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.460 20:41:26 -- paths/export.sh@5 -- # export PATH 00:24:08.460 20:41:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.460 20:41:26 -- nvmf/common.sh@46 -- # : 0 00:24:08.460 20:41:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:08.460 20:41:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:08.460 20:41:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:08.460 20:41:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.460 20:41:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.460 20:41:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:08.460 20:41:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:08.460 20:41:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:08.460 20:41:26 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:08.460 20:41:26 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:08.460 20:41:26 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:24:08.460 20:41:26 -- target/fio.sh@16 -- # nvmftestinit 00:24:08.460 20:41:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:08.460 20:41:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.460 20:41:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:08.460 20:41:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:08.460 20:41:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:08.460 20:41:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.460 20:41:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:08.460 20:41:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.460 20:41:26 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:24:08.460 20:41:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:08.460 20:41:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:08.460 20:41:26 -- 
common/autotest_common.sh@10 -- # set +x 00:24:13.736 20:41:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:13.736 20:41:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:13.737 20:41:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:13.737 20:41:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:13.737 20:41:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:13.737 20:41:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:13.737 20:41:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:13.737 20:41:32 -- nvmf/common.sh@294 -- # net_devs=() 00:24:13.737 20:41:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:13.737 20:41:32 -- nvmf/common.sh@295 -- # e810=() 00:24:13.737 20:41:32 -- nvmf/common.sh@295 -- # local -ga e810 00:24:13.737 20:41:32 -- nvmf/common.sh@296 -- # x722=() 00:24:13.737 20:41:32 -- nvmf/common.sh@296 -- # local -ga x722 00:24:13.737 20:41:32 -- nvmf/common.sh@297 -- # mlx=() 00:24:13.737 20:41:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:13.737 20:41:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:13.737 20:41:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:13.737 20:41:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:13.737 20:41:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:13.737 20:41:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:13.737 20:41:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:13.737 20:41:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:13.737 20:41:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:13.737 20:41:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:13.737 20:41:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:13.737 20:41:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:13.737 20:41:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:13.737 20:41:32 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:13.737 20:41:32 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:24:13.737 20:41:32 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:24:13.737 20:41:32 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:24:13.737 20:41:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:13.737 20:41:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:13.737 20:41:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:24:13.737 Found 0000:27:00.0 (0x8086 - 0x159b) 00:24:13.737 20:41:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:13.737 20:41:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:13.737 20:41:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.737 20:41:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.737 20:41:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:13.737 20:41:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:13.737 20:41:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:24:13.737 Found 0000:27:00.1 (0x8086 - 0x159b) 00:24:13.737 20:41:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:13.737 20:41:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:13.737 20:41:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.737 20:41:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.737 
20:41:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:13.737 20:41:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:13.737 20:41:32 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:24:13.737 20:41:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:13.737 20:41:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.737 20:41:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:13.737 20:41:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.737 20:41:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:24:13.737 Found net devices under 0000:27:00.0: cvl_0_0 00:24:13.737 20:41:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.737 20:41:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:13.737 20:41:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.737 20:41:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:13.737 20:41:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.737 20:41:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:24:13.737 Found net devices under 0000:27:00.1: cvl_0_1 00:24:13.737 20:41:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.737 20:41:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:13.737 20:41:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:13.737 20:41:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:13.737 20:41:32 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:13.737 20:41:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:13.737 20:41:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:13.737 20:41:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:13.737 20:41:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:13.737 20:41:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:13.737 20:41:32 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:13.737 20:41:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:13.737 20:41:32 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:13.737 20:41:32 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:13.737 20:41:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:13.737 20:41:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:13.737 20:41:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:13.997 20:41:32 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:13.997 20:41:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:13.997 20:41:32 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:13.997 20:41:32 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:13.997 20:41:32 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:13.997 20:41:32 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:13.997 20:41:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:13.997 20:41:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:13.997 20:41:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:13.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
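The NIC discovery and namespace bring-up replay here verbatim because each target test is self-contained: fio.sh, like nmic.sh before it, sources nvmf/common.sh and runs the same init/teardown pair. The shared skeleton, reduced to the steps visible in this trace, with $rootdir standing in for the checkout path:

```bash
# Common shape of the target tests in this run; $rootdir is assumed
# to point at the spdk checkout.
source "$rootdir/test/nvmf/common.sh"
nvmftestinit                       # NIC discovery + netns/IP setup seen above
trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
nvmfappstart -m 0xF                # start nvmf_tgt, wait for the RPC socket
# ... per-test RPC setup, nvme connect, fio workloads ...
nvmftestfini                       # disconnect, unload modules, flush netns
```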
00:24:13.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:24:13.997 00:24:13.997 --- 10.0.0.2 ping statistics --- 00:24:13.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.997 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:24:13.997 20:41:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:13.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:13.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.447 ms 00:24:13.997 00:24:13.997 --- 10.0.0.1 ping statistics --- 00:24:13.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.997 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:24:13.997 20:41:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:13.998 20:41:32 -- nvmf/common.sh@410 -- # return 0 00:24:13.998 20:41:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:13.998 20:41:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:13.998 20:41:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:13.998 20:41:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:13.998 20:41:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:13.998 20:41:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:13.998 20:41:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:13.998 20:41:32 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:24:13.998 20:41:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:13.998 20:41:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:13.998 20:41:32 -- common/autotest_common.sh@10 -- # set +x 00:24:14.258 20:41:32 -- nvmf/common.sh@469 -- # nvmfpid=3607505 00:24:14.258 20:41:32 -- nvmf/common.sh@470 -- # waitforlisten 3607505 00:24:14.258 20:41:32 -- common/autotest_common.sh@819 -- # '[' -z 3607505 ']' 00:24:14.258 20:41:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.258 20:41:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:14.258 20:41:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.258 20:41:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:14.258 20:41:32 -- common/autotest_common.sh@10 -- # set +x 00:24:14.258 20:41:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:14.258 [2024-04-26 20:41:32.422648] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:14.258 [2024-04-26 20:41:32.422766] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.258 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.258 [2024-04-26 20:41:32.548567] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:14.519 [2024-04-26 20:41:32.647136] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:14.519 [2024-04-26 20:41:32.647331] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:14.519 [2024-04-26 20:41:32.647345] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
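As on the first target, -m 0xF pins one reactor to each of the four cores and -e 0xFFFF enables every tracepoint group, which is why app_setup_trace prints the snapshot hint. Per that notice, a trace can be pulled while the target runs; the binary location below is an assumption matching the build tree, the command itself is quoted from the log:

```bash
# Live snapshot over shared memory, as the notice above suggests,
# or copy the shm file for offline decoding (both paths from the log).
./build/bin/spdk_trace -s nvmf -i 0
cp /dev/shm/nvmf_trace.0 /tmp/
```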
00:24:14.519 [2024-04-26 20:41:32.647359] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:14.519 [2024-04-26 20:41:32.647441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.519 [2024-04-26 20:41:32.647577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:14.519 [2024-04-26 20:41:32.647686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.519 [2024-04-26 20:41:32.647696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:15.091 20:41:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:15.091 20:41:33 -- common/autotest_common.sh@852 -- # return 0 00:24:15.091 20:41:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:15.091 20:41:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:15.091 20:41:33 -- common/autotest_common.sh@10 -- # set +x 00:24:15.091 20:41:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.091 20:41:33 -- target/fio.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:15.091 [2024-04-26 20:41:33.302669] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:15.091 20:41:33 -- target/fio.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:15.352 20:41:33 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:24:15.352 20:41:33 -- target/fio.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:15.352 20:41:33 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:24:15.352 20:41:33 -- target/fio.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:15.613 20:41:33 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:24:15.613 20:41:33 -- target/fio.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:15.896 20:41:34 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:24:15.896 20:41:34 -- target/fio.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:24:15.896 20:41:34 -- target/fio.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:16.210 20:41:34 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:24:16.210 20:41:34 -- target/fio.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:16.471 20:41:34 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:24:16.471 20:41:34 -- target/fio.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:16.471 20:41:34 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:24:16.471 20:41:34 -- target/fio.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:24:16.731 20:41:34 -- target/fio.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:16.991 20:41:35 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:24:16.991 20:41:35 -- target/fio.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:16.991 20:41:35 -- 
target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:24:16.991 20:41:35 -- target/fio.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:17.255 20:41:35 -- target/fio.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:17.255 [2024-04-26 20:41:35.509477] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.255 20:41:35 -- target/fio.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:24:17.516 20:41:35 -- target/fio.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:24:17.516 20:41:35 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:19.426 20:41:37 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:24:19.426 20:41:37 -- common/autotest_common.sh@1177 -- # local i=0 00:24:19.426 20:41:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:19.426 20:41:37 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:24:19.426 20:41:37 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:24:19.426 20:41:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:21.339 20:41:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:21.339 20:41:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:21.339 20:41:39 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:24:21.339 20:41:39 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:24:21.339 20:41:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:21.339 20:41:39 -- common/autotest_common.sh@1187 -- # return 0 00:24:21.339 20:41:39 -- target/fio.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:24:21.339 [global] 00:24:21.339 thread=1 00:24:21.339 invalidate=1 00:24:21.339 rw=write 00:24:21.339 time_based=1 00:24:21.339 runtime=1 00:24:21.339 ioengine=libaio 00:24:21.339 direct=1 00:24:21.339 bs=4096 00:24:21.339 iodepth=1 00:24:21.339 norandommap=0 00:24:21.339 numjobs=1 00:24:21.339 00:24:21.339 verify_dump=1 00:24:21.339 verify_backlog=512 00:24:21.339 verify_state_save=0 00:24:21.339 do_verify=1 00:24:21.339 verify=crc32c-intel 00:24:21.339 [job0] 00:24:21.339 filename=/dev/nvme0n1 00:24:21.339 [job1] 00:24:21.339 filename=/dev/nvme0n2 00:24:21.339 [job2] 00:24:21.339 filename=/dev/nvme0n3 00:24:21.339 [job3] 00:24:21.339 filename=/dev/nvme0n4 00:24:21.339 Could not set queue depth (nvme0n1) 00:24:21.339 Could not set queue depth (nvme0n2) 00:24:21.339 Could not set queue depth (nvme0n3) 00:24:21.339 Could not set queue depth (nvme0n4) 00:24:21.597 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:21.597 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:21.597 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:21.597 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:24:21.597 fio-3.35 00:24:21.597 Starting 4 threads 00:24:22.985 00:24:22.985 job0: (groupid=0, jobs=1): err= 0: pid=3609054: Fri Apr 26 20:41:40 2024 00:24:22.985 read: IOPS=20, BW=83.7KiB/s (85.8kB/s)(84.0KiB/1003msec) 00:24:22.985 slat (nsec): min=6022, max=41372, avg=31699.00, stdev=10981.36 00:24:22.985 clat (usec): min=41819, max=42932, avg=42025.94, stdev=258.07 00:24:22.985 lat (usec): min=41860, max=42964, avg=42057.64, stdev=254.29 00:24:22.985 clat percentiles (usec): 00:24:22.985 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:24:22.985 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:24:22.985 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:24:22.985 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:24:22.985 | 99.99th=[42730] 00:24:22.985 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:24:22.985 slat (nsec): min=4682, max=89811, avg=6221.83, stdev=4051.92 00:24:22.985 clat (usec): min=148, max=705, avg=225.90, stdev=51.98 00:24:22.985 lat (usec): min=156, max=712, avg=232.12, stdev=53.29 00:24:22.985 clat percentiles (usec): 00:24:22.985 | 1.00th=[ 167], 5.00th=[ 182], 10.00th=[ 190], 20.00th=[ 200], 00:24:22.985 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 225], 00:24:22.985 | 70.00th=[ 231], 80.00th=[ 241], 90.00th=[ 258], 95.00th=[ 277], 00:24:22.985 | 99.00th=[ 498], 99.50th=[ 652], 99.90th=[ 709], 99.95th=[ 709], 00:24:22.985 | 99.99th=[ 709] 00:24:22.985 bw ( KiB/s): min= 4096, max= 4096, per=29.20%, avg=4096.00, stdev= 0.00, samples=1 00:24:22.985 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:24:22.985 lat (usec) : 250=82.36%, 500=12.76%, 750=0.94% 00:24:22.985 lat (msec) : 50=3.94% 00:24:22.985 cpu : usr=0.10%, sys=0.30%, ctx=534, majf=0, minf=1 00:24:22.985 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:22.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:22.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:22.985 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:22.985 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:22.985 job1: (groupid=0, jobs=1): err= 0: pid=3609055: Fri Apr 26 20:41:40 2024 00:24:22.985 read: IOPS=1703, BW=6813KiB/s (6977kB/s)(6820KiB/1001msec) 00:24:22.985 slat (nsec): min=3787, max=51897, avg=12542.12, stdev=9634.74 00:24:22.985 clat (usec): min=198, max=1686, avg=325.52, stdev=75.29 00:24:22.985 lat (usec): min=203, max=1697, avg=338.06, stdev=80.66 00:24:22.985 clat percentiles (usec): 00:24:22.985 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 225], 20.00th=[ 269], 00:24:22.985 | 30.00th=[ 293], 40.00th=[ 306], 50.00th=[ 318], 60.00th=[ 330], 00:24:22.985 | 70.00th=[ 371], 80.00th=[ 400], 90.00th=[ 420], 95.00th=[ 433], 00:24:22.985 | 99.00th=[ 457], 99.50th=[ 469], 99.90th=[ 562], 99.95th=[ 1680], 00:24:22.985 | 99.99th=[ 1680] 00:24:22.985 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:24:22.985 slat (nsec): min=4008, max=72193, avg=10376.71, stdev=8969.43 00:24:22.985 clat (usec): min=112, max=3585, avg=190.14, stdev=118.37 00:24:22.985 lat (usec): min=119, max=3590, avg=200.52, stdev=120.74 00:24:22.985 clat percentiles (usec): 00:24:22.985 | 1.00th=[ 120], 5.00th=[ 124], 10.00th=[ 128], 20.00th=[ 143], 00:24:22.985 | 30.00th=[ 159], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 178], 00:24:22.985 | 
70.00th=[ 190], 80.00th=[ 243], 90.00th=[ 269], 95.00th=[ 289], 00:24:22.985 | 99.00th=[ 355], 99.50th=[ 392], 99.90th=[ 2474], 99.95th=[ 2573], 00:24:22.985 | 99.99th=[ 3589] 00:24:22.985 bw ( KiB/s): min= 8192, max= 8192, per=58.40%, avg=8192.00, stdev= 0.00, samples=1 00:24:22.985 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:24:22.985 lat (usec) : 250=52.86%, 500=46.87%, 750=0.16% 00:24:22.985 lat (msec) : 2=0.03%, 4=0.08% 00:24:22.985 cpu : usr=2.20%, sys=4.10%, ctx=3754, majf=0, minf=1 00:24:22.985 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:22.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:22.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:22.985 issued rwts: total=1705,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:22.985 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:22.985 job2: (groupid=0, jobs=1): err= 0: pid=3609056: Fri Apr 26 20:41:40 2024 00:24:22.985 read: IOPS=21, BW=86.1KiB/s (88.2kB/s)(88.0KiB/1022msec) 00:24:22.985 slat (nsec): min=7378, max=41545, avg=33988.14, stdev=7846.17 00:24:22.985 clat (usec): min=40869, max=41026, avg=40947.73, stdev=38.14 00:24:22.985 lat (usec): min=40877, max=41059, avg=40981.72, stdev=40.82 00:24:22.985 clat percentiles (usec): 00:24:22.985 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:24:22.985 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:24:22.985 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:24:22.985 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:24:22.985 | 99.99th=[41157] 00:24:22.985 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:24:22.985 slat (nsec): min=5110, max=69000, avg=7944.35, stdev=3274.73 00:24:22.985 clat (usec): min=164, max=1246, avg=224.59, stdev=67.16 00:24:22.985 lat (usec): min=171, max=1252, avg=232.54, stdev=67.63 00:24:22.985 clat percentiles (usec): 00:24:22.985 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 198], 00:24:22.985 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 221], 00:24:22.985 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 253], 95.00th=[ 273], 00:24:22.985 | 99.00th=[ 523], 99.50th=[ 644], 99.90th=[ 1254], 99.95th=[ 1254], 00:24:22.985 | 99.99th=[ 1254] 00:24:22.985 bw ( KiB/s): min= 4096, max= 4096, per=29.20%, avg=4096.00, stdev= 0.00, samples=1 00:24:22.985 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:24:22.985 lat (usec) : 250=85.02%, 500=9.55%, 750=1.12% 00:24:22.985 lat (msec) : 2=0.19%, 50=4.12% 00:24:22.985 cpu : usr=0.29%, sys=0.29%, ctx=535, majf=0, minf=1 00:24:22.985 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:22.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:22.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:22.985 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:22.985 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:22.985 job3: (groupid=0, jobs=1): err= 0: pid=3609057: Fri Apr 26 20:41:40 2024 00:24:22.985 read: IOPS=34, BW=137KiB/s (140kB/s)(140KiB/1021msec) 00:24:22.985 slat (nsec): min=5451, max=40300, avg=22898.17, stdev=14536.54 00:24:22.985 clat (usec): min=321, max=42955, avg=25312.06, stdev=20648.10 00:24:22.985 lat (usec): min=328, max=42989, avg=25334.96, stdev=20661.44 00:24:22.985 clat percentiles (usec): 
00:24:22.985 | 1.00th=[ 322], 5.00th=[ 359], 10.00th=[ 363], 20.00th=[ 375], 00:24:22.985 | 30.00th=[ 396], 40.00th=[ 545], 50.00th=[41681], 60.00th=[41681], 00:24:22.985 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:24:22.985 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:24:22.985 | 99.99th=[42730] 00:24:22.985 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:24:22.985 slat (nsec): min=4460, max=54740, avg=7955.80, stdev=3721.01 00:24:22.985 clat (usec): min=160, max=3511, avg=252.39, stdev=262.76 00:24:22.985 lat (usec): min=166, max=3516, avg=260.35, stdev=262.85 00:24:22.985 clat percentiles (usec): 00:24:22.985 | 1.00th=[ 169], 5.00th=[ 182], 10.00th=[ 190], 20.00th=[ 200], 00:24:22.985 | 30.00th=[ 212], 40.00th=[ 223], 50.00th=[ 235], 60.00th=[ 241], 00:24:22.985 | 70.00th=[ 245], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 273], 00:24:22.985 | 99.00th=[ 627], 99.50th=[ 3032], 99.90th=[ 3523], 99.95th=[ 3523], 00:24:22.985 | 99.99th=[ 3523] 00:24:22.985 bw ( KiB/s): min= 4096, max= 4096, per=29.20%, avg=4096.00, stdev= 0.00, samples=1 00:24:22.985 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:24:22.985 lat (usec) : 250=80.62%, 500=14.08%, 750=0.73% 00:24:22.985 lat (msec) : 4=0.73%, 50=3.84% 00:24:22.985 cpu : usr=0.20%, sys=0.39%, ctx=547, majf=0, minf=1 00:24:22.985 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:22.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:22.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:22.985 issued rwts: total=35,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:22.985 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:22.985 00:24:22.985 Run status group 0 (all jobs): 00:24:22.985 READ: bw=6978KiB/s (7146kB/s), 83.7KiB/s-6813KiB/s (85.8kB/s-6977kB/s), io=7132KiB (7303kB), run=1001-1022msec 00:24:22.985 WRITE: bw=13.7MiB/s (14.4MB/s), 2004KiB/s-8184KiB/s (2052kB/s-8380kB/s), io=14.0MiB (14.7MB), run=1001-1022msec 00:24:22.985 00:24:22.985 Disk stats (read/write): 00:24:22.985 nvme0n1: ios=66/512, merge=0/0, ticks=708/113, in_queue=821, util=84.07% 00:24:22.985 nvme0n2: ios=1335/1536, merge=0/0, ticks=857/311, in_queue=1168, util=87.98% 00:24:22.985 nvme0n3: ios=38/512, merge=0/0, ticks=1578/112, in_queue=1690, util=94.95% 00:24:22.985 nvme0n4: ios=87/512, merge=0/0, ticks=753/129, in_queue=882, util=95.64% 00:24:22.985 20:41:40 -- target/fio.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:24:22.985 [global] 00:24:22.985 thread=1 00:24:22.985 invalidate=1 00:24:22.985 rw=randwrite 00:24:22.985 time_based=1 00:24:22.985 runtime=1 00:24:22.985 ioengine=libaio 00:24:22.985 direct=1 00:24:22.985 bs=4096 00:24:22.985 iodepth=1 00:24:22.985 norandommap=0 00:24:22.985 numjobs=1 00:24:22.985 00:24:22.985 verify_dump=1 00:24:22.985 verify_backlog=512 00:24:22.985 verify_state_save=0 00:24:22.985 do_verify=1 00:24:22.985 verify=crc32c-intel 00:24:22.985 [job0] 00:24:22.985 filename=/dev/nvme0n1 00:24:22.985 [job1] 00:24:22.985 filename=/dev/nvme0n2 00:24:22.985 [job2] 00:24:22.985 filename=/dev/nvme0n3 00:24:22.985 [job3] 00:24:22.985 filename=/dev/nvme0n4 00:24:22.985 Could not set queue depth (nvme0n1) 00:24:22.985 Could not set queue depth (nvme0n2) 00:24:22.985 Could not set queue depth (nvme0n3) 00:24:22.985 Could not set queue depth (nvme0n4) 00:24:23.244 job0: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:23.244 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:23.244 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:23.244 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:23.244 fio-3.35 00:24:23.244 Starting 4 threads 00:24:24.626 00:24:24.626 job0: (groupid=0, jobs=1): err= 0: pid=3609548: Fri Apr 26 20:41:42 2024 00:24:24.626 read: IOPS=1619, BW=6478KiB/s (6633kB/s)(6484KiB/1001msec) 00:24:24.626 slat (nsec): min=3134, max=45383, avg=6350.78, stdev=2895.40 00:24:24.626 clat (usec): min=279, max=628, avg=355.19, stdev=39.60 00:24:24.626 lat (usec): min=291, max=633, avg=361.54, stdev=40.18 00:24:24.626 clat percentiles (usec): 00:24:24.626 | 1.00th=[ 302], 5.00th=[ 310], 10.00th=[ 318], 20.00th=[ 330], 00:24:24.626 | 30.00th=[ 334], 40.00th=[ 343], 50.00th=[ 347], 60.00th=[ 351], 00:24:24.626 | 70.00th=[ 359], 80.00th=[ 371], 90.00th=[ 416], 95.00th=[ 441], 00:24:24.626 | 99.00th=[ 482], 99.50th=[ 494], 99.90th=[ 603], 99.95th=[ 627], 00:24:24.626 | 99.99th=[ 627] 00:24:24.626 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:24:24.626 slat (nsec): min=4721, max=55895, avg=7357.34, stdev=1868.42 00:24:24.626 clat (usec): min=135, max=650, avg=191.38, stdev=36.55 00:24:24.626 lat (usec): min=141, max=706, avg=198.74, stdev=37.73 00:24:24.626 clat percentiles (usec): 00:24:24.626 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:24:24.626 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 190], 00:24:24.626 | 70.00th=[ 206], 80.00th=[ 219], 90.00th=[ 237], 95.00th=[ 253], 00:24:24.626 | 99.00th=[ 293], 99.50th=[ 326], 99.90th=[ 433], 99.95th=[ 594], 00:24:24.626 | 99.99th=[ 652] 00:24:24.626 bw ( KiB/s): min= 8192, max= 8192, per=36.73%, avg=8192.00, stdev= 0.00, samples=1 00:24:24.626 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:24:24.626 lat (usec) : 250=52.49%, 500=47.26%, 750=0.25% 00:24:24.626 cpu : usr=2.40%, sys=2.90%, ctx=3672, majf=0, minf=1 00:24:24.626 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:24.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:24.627 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:24.627 issued rwts: total=1621,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:24.627 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:24.627 job1: (groupid=0, jobs=1): err= 0: pid=3609555: Fri Apr 26 20:41:42 2024 00:24:24.627 read: IOPS=25, BW=103KiB/s (106kB/s)(104KiB/1009msec) 00:24:24.627 slat (nsec): min=6660, max=32169, avg=25837.92, stdev=10131.78 00:24:24.627 clat (usec): min=300, max=43006, avg=34156.72, stdev=16801.97 00:24:24.627 lat (usec): min=312, max=43037, avg=34182.56, stdev=16810.70 00:24:24.627 clat percentiles (usec): 00:24:24.627 | 1.00th=[ 302], 5.00th=[ 371], 10.00th=[ 383], 20.00th=[41681], 00:24:24.627 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:24:24.627 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:24:24.627 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:24:24.627 | 99.99th=[43254] 00:24:24.627 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:24:24.627 slat (nsec): 
min=5375, max=48646, avg=7796.58, stdev=2937.60 00:24:24.627 clat (usec): min=153, max=867, avg=223.92, stdev=44.59 00:24:24.627 lat (usec): min=162, max=915, avg=231.71, stdev=46.35 00:24:24.627 clat percentiles (usec): 00:24:24.627 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 192], 00:24:24.627 | 30.00th=[ 200], 40.00th=[ 208], 50.00th=[ 223], 60.00th=[ 235], 00:24:24.627 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[ 269], 00:24:24.627 | 99.00th=[ 322], 99.50th=[ 338], 99.90th=[ 865], 99.95th=[ 865], 00:24:24.627 | 99.99th=[ 865] 00:24:24.627 bw ( KiB/s): min= 4096, max= 4096, per=18.36%, avg=4096.00, stdev= 0.00, samples=1 00:24:24.627 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:24:24.627 lat (usec) : 250=76.02%, 500=19.70%, 750=0.19%, 1000=0.19% 00:24:24.627 lat (msec) : 50=3.90% 00:24:24.627 cpu : usr=0.30%, sys=0.50%, ctx=538, majf=0, minf=1 00:24:24.627 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:24.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:24.627 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:24.627 issued rwts: total=26,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:24.627 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:24.627 job2: (groupid=0, jobs=1): err= 0: pid=3609571: Fri Apr 26 20:41:42 2024 00:24:24.627 read: IOPS=1885, BW=7540KiB/s (7721kB/s)(7548KiB/1001msec) 00:24:24.627 slat (nsec): min=3915, max=34186, avg=6746.44, stdev=2107.47 00:24:24.627 clat (usec): min=206, max=532, avg=312.86, stdev=71.52 00:24:24.627 lat (usec): min=212, max=546, avg=319.60, stdev=72.27 00:24:24.627 clat percentiles (usec): 00:24:24.627 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 237], 00:24:24.627 | 30.00th=[ 253], 40.00th=[ 273], 50.00th=[ 314], 60.00th=[ 334], 00:24:24.627 | 70.00th=[ 359], 80.00th=[ 383], 90.00th=[ 412], 95.00th=[ 433], 00:24:24.627 | 99.00th=[ 478], 99.50th=[ 494], 99.90th=[ 529], 99.95th=[ 537], 00:24:24.627 | 99.99th=[ 537] 00:24:24.627 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:24:24.627 slat (nsec): min=4782, max=47031, avg=7231.56, stdev=1954.89 00:24:24.627 clat (usec): min=119, max=914, avg=182.75, stdev=37.38 00:24:24.627 lat (usec): min=126, max=961, avg=189.98, stdev=38.34 00:24:24.627 clat percentiles (usec): 00:24:24.627 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 149], 00:24:24.627 | 30.00th=[ 169], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 190], 00:24:24.627 | 70.00th=[ 198], 80.00th=[ 210], 90.00th=[ 223], 95.00th=[ 235], 00:24:24.627 | 99.00th=[ 269], 99.50th=[ 310], 99.90th=[ 400], 99.95th=[ 449], 00:24:24.627 | 99.99th=[ 914] 00:24:24.627 bw ( KiB/s): min= 8192, max= 8192, per=36.73%, avg=8192.00, stdev= 0.00, samples=1 00:24:24.627 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:24:24.627 lat (usec) : 250=64.68%, 500=35.22%, 750=0.08%, 1000=0.03% 00:24:24.627 cpu : usr=1.90%, sys=2.50%, ctx=3935, majf=0, minf=1 00:24:24.627 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:24.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:24.627 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:24.627 issued rwts: total=1887,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:24.627 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:24.627 job3: (groupid=0, jobs=1): err= 0: pid=3609579: Fri Apr 26 20:41:42 2024 
00:24:24.627 read: IOPS=578, BW=2313KiB/s (2368kB/s)(2336KiB/1010msec) 00:24:24.627 slat (nsec): min=3893, max=31500, avg=8115.46, stdev=4802.64 00:24:24.627 clat (usec): min=309, max=42081, avg=1332.40, stdev=6105.29 00:24:24.627 lat (usec): min=316, max=42089, avg=1340.51, stdev=6105.94 00:24:24.627 clat percentiles (usec): 00:24:24.627 | 1.00th=[ 322], 5.00th=[ 338], 10.00th=[ 351], 20.00th=[ 367], 00:24:24.627 | 30.00th=[ 383], 40.00th=[ 392], 50.00th=[ 404], 60.00th=[ 416], 00:24:24.627 | 70.00th=[ 433], 80.00th=[ 453], 90.00th=[ 482], 95.00th=[ 510], 00:24:24.627 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:24:24.627 | 99.99th=[42206] 00:24:24.627 write: IOPS=1013, BW=4055KiB/s (4153kB/s)(4096KiB/1010msec); 0 zone resets 00:24:24.627 slat (nsec): min=5890, max=46706, avg=8164.47, stdev=2169.09 00:24:24.627 clat (usec): min=145, max=771, avg=210.31, stdev=35.45 00:24:24.627 lat (usec): min=153, max=818, avg=218.48, stdev=36.39 00:24:24.627 clat percentiles (usec): 00:24:24.627 | 1.00th=[ 159], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 184], 00:24:24.627 | 30.00th=[ 190], 40.00th=[ 198], 50.00th=[ 206], 60.00th=[ 215], 00:24:24.627 | 70.00th=[ 223], 80.00th=[ 235], 90.00th=[ 249], 95.00th=[ 262], 00:24:24.627 | 99.00th=[ 302], 99.50th=[ 322], 99.90th=[ 474], 99.95th=[ 775], 00:24:24.627 | 99.99th=[ 775] 00:24:24.627 bw ( KiB/s): min= 2648, max= 5544, per=18.36%, avg=4096.00, stdev=2047.78, samples=2 00:24:24.627 iops : min= 662, max= 1386, avg=1024.00, stdev=511.95, samples=2 00:24:24.627 lat (usec) : 250=57.65%, 500=40.11%, 750=1.24%, 1000=0.12% 00:24:24.627 lat (msec) : 4=0.06%, 50=0.81% 00:24:24.627 cpu : usr=0.69%, sys=1.29%, ctx=1609, majf=0, minf=1 00:24:24.627 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:24.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:24.627 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:24.627 issued rwts: total=584,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:24.627 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:24.627 00:24:24.627 Run status group 0 (all jobs): 00:24:24.627 READ: bw=15.9MiB/s (16.7MB/s), 103KiB/s-7540KiB/s (106kB/s-7721kB/s), io=16.1MiB (16.9MB), run=1001-1010msec 00:24:24.627 WRITE: bw=21.8MiB/s (22.8MB/s), 2030KiB/s-8184KiB/s (2078kB/s-8380kB/s), io=22.0MiB (23.1MB), run=1001-1010msec 00:24:24.627 00:24:24.627 Disk stats (read/write): 00:24:24.627 nvme0n1: ios=1457/1536, merge=0/0, ticks=948/303, in_queue=1251, util=96.29% 00:24:24.627 nvme0n2: ios=70/512, merge=0/0, ticks=693/110, in_queue=803, util=86.95% 00:24:24.627 nvme0n3: ios=1582/1576, merge=0/0, ticks=569/300, in_queue=869, util=93.99% 00:24:24.627 nvme0n4: ios=620/1024, merge=0/0, ticks=1482/216, in_queue=1698, util=96.27% 00:24:24.627 20:41:42 -- target/fio.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:24:24.627 [global] 00:24:24.627 thread=1 00:24:24.627 invalidate=1 00:24:24.627 rw=write 00:24:24.627 time_based=1 00:24:24.627 runtime=1 00:24:24.627 ioengine=libaio 00:24:24.627 direct=1 00:24:24.627 bs=4096 00:24:24.627 iodepth=128 00:24:24.627 norandommap=0 00:24:24.627 numjobs=1 00:24:24.627 00:24:24.627 verify_dump=1 00:24:24.627 verify_backlog=512 00:24:24.627 verify_state_save=0 00:24:24.627 do_verify=1 00:24:24.627 verify=crc32c-intel 00:24:24.627 [job0] 00:24:24.627 filename=/dev/nvme0n1 00:24:24.627 [job1] 00:24:24.627 filename=/dev/nvme0n2 00:24:24.627 
[job2] 00:24:24.627 filename=/dev/nvme0n3 00:24:24.627 [job3] 00:24:24.627 filename=/dev/nvme0n4 00:24:24.627 Could not set queue depth (nvme0n1) 00:24:24.627 Could not set queue depth (nvme0n2) 00:24:24.627 Could not set queue depth (nvme0n3) 00:24:24.627 Could not set queue depth (nvme0n4) 00:24:24.886 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:24.886 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:24.886 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:24.886 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:24.886 fio-3.35 00:24:24.886 Starting 4 threads 00:24:26.259 00:24:26.259 job0: (groupid=0, jobs=1): err= 0: pid=3610116: Fri Apr 26 20:41:44 2024 00:24:26.259 read: IOPS=3359, BW=13.1MiB/s (13.8MB/s)(13.2MiB/1004msec) 00:24:26.259 slat (nsec): min=808, max=24011k, avg=161775.93, stdev=1202302.57 00:24:26.259 clat (usec): min=3581, max=58788, avg=19873.44, stdev=10423.71 00:24:26.259 lat (usec): min=3588, max=58811, avg=20035.21, stdev=10516.69 00:24:26.259 clat percentiles (usec): 00:24:26.259 | 1.00th=[ 7373], 5.00th=[ 9241], 10.00th=[10421], 20.00th=[11207], 00:24:26.259 | 30.00th=[15270], 40.00th=[17171], 50.00th=[17695], 60.00th=[17695], 00:24:26.259 | 70.00th=[18482], 80.00th=[25560], 90.00th=[39060], 95.00th=[44303], 00:24:26.259 | 99.00th=[52691], 99.50th=[52691], 99.90th=[55313], 99.95th=[57410], 00:24:26.259 | 99.99th=[58983] 00:24:26.259 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:24:26.259 slat (nsec): min=1553, max=10533k, avg=116648.05, stdev=627945.35 00:24:26.259 clat (usec): min=1440, max=61045, avg=16854.04, stdev=10995.06 00:24:26.259 lat (usec): min=1573, max=61053, avg=16970.68, stdev=11054.48 00:24:26.259 clat percentiles (usec): 00:24:26.259 | 1.00th=[ 2900], 5.00th=[ 6652], 10.00th=[ 8094], 20.00th=[10945], 00:24:26.259 | 30.00th=[11863], 40.00th=[12649], 50.00th=[13566], 60.00th=[14222], 00:24:26.259 | 70.00th=[16712], 80.00th=[21627], 90.00th=[23725], 95.00th=[47449], 00:24:26.259 | 99.00th=[58459], 99.50th=[59507], 99.90th=[61080], 99.95th=[61080], 00:24:26.259 | 99.99th=[61080] 00:24:26.259 bw ( KiB/s): min=12288, max=16384, per=18.17%, avg=14336.00, stdev=2896.31, samples=2 00:24:26.259 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:24:26.259 lat (msec) : 2=0.14%, 4=1.12%, 10=10.00%, 20=62.64%, 50=22.54% 00:24:26.259 lat (msec) : 100=3.55% 00:24:26.259 cpu : usr=1.40%, sys=3.59%, ctx=338, majf=0, minf=1 00:24:26.259 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:26.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:26.259 issued rwts: total=3373,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.259 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:26.259 job1: (groupid=0, jobs=1): err= 0: pid=3610129: Fri Apr 26 20:41:44 2024 00:24:26.259 read: IOPS=6083, BW=23.8MiB/s (24.9MB/s)(24.0MiB/1010msec) 00:24:26.259 slat (nsec): min=1043, max=9491.0k, avg=81623.33, stdev=582807.16 00:24:26.259 clat (usec): min=3401, max=20161, avg=10154.53, stdev=2631.42 00:24:26.259 lat (usec): min=3404, max=20164, avg=10236.15, stdev=2663.46 00:24:26.259 clat percentiles (usec): 00:24:26.259 | 1.00th=[ 
5735], 5.00th=[ 6915], 10.00th=[ 7767], 20.00th=[ 8455], 00:24:26.259 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9896], 00:24:26.259 | 70.00th=[10552], 80.00th=[11994], 90.00th=[14222], 95.00th=[15664], 00:24:26.259 | 99.00th=[17695], 99.50th=[18220], 99.90th=[19530], 99.95th=[20055], 00:24:26.259 | 99.99th=[20055] 00:24:26.259 write: IOPS=6295, BW=24.6MiB/s (25.8MB/s)(24.8MiB/1010msec); 0 zone resets 00:24:26.259 slat (nsec): min=1775, max=10516k, avg=75739.73, stdev=444935.32 00:24:26.259 clat (usec): min=1190, max=58332, avg=10307.34, stdev=6742.66 00:24:26.259 lat (usec): min=1202, max=58337, avg=10383.08, stdev=6775.06 00:24:26.259 clat percentiles (usec): 00:24:26.259 | 1.00th=[ 2802], 5.00th=[ 4752], 10.00th=[ 5538], 20.00th=[ 6849], 00:24:26.259 | 30.00th=[ 8291], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10290], 00:24:26.259 | 70.00th=[10421], 80.00th=[11600], 90.00th=[12518], 95.00th=[13960], 00:24:26.259 | 99.00th=[57410], 99.50th=[57934], 99.90th=[58459], 99.95th=[58459], 00:24:26.260 | 99.99th=[58459] 00:24:26.260 bw ( KiB/s): min=24560, max=25288, per=31.60%, avg=24924.00, stdev=514.77, samples=2 00:24:26.260 iops : min= 6140, max= 6322, avg=6231.00, stdev=128.69, samples=2 00:24:26.260 lat (msec) : 2=0.02%, 4=2.18%, 10=52.29%, 20=44.07%, 50=0.82% 00:24:26.260 lat (msec) : 100=0.63% 00:24:26.260 cpu : usr=2.08%, sys=3.77%, ctx=716, majf=0, minf=1 00:24:26.260 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:24:26.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:26.260 issued rwts: total=6144,6358,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:26.260 job2: (groupid=0, jobs=1): err= 0: pid=3610151: Fri Apr 26 20:41:44 2024 00:24:26.260 read: IOPS=5570, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1011msec) 00:24:26.260 slat (nsec): min=985, max=10708k, avg=95561.22, stdev=711756.52 00:24:26.260 clat (usec): min=3648, max=23346, avg=11587.28, stdev=3091.87 00:24:26.260 lat (usec): min=3651, max=23348, avg=11682.85, stdev=3139.53 00:24:26.260 clat percentiles (usec): 00:24:26.260 | 1.00th=[ 5735], 5.00th=[ 7832], 10.00th=[ 9110], 20.00th=[ 9634], 00:24:26.260 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[10683], 60.00th=[11076], 00:24:26.260 | 70.00th=[11731], 80.00th=[13698], 90.00th=[16450], 95.00th=[18220], 00:24:26.260 | 99.00th=[20579], 99.50th=[21627], 99.90th=[22938], 99.95th=[23462], 00:24:26.260 | 99.99th=[23462] 00:24:26.260 write: IOPS=5845, BW=22.8MiB/s (23.9MB/s)(23.1MiB/1011msec); 0 zone resets 00:24:26.260 slat (nsec): min=1689, max=10196k, avg=76137.19, stdev=447300.32 00:24:26.260 clat (usec): min=2108, max=23343, avg=10657.22, stdev=3016.14 00:24:26.260 lat (usec): min=2110, max=23346, avg=10733.36, stdev=3030.06 00:24:26.260 clat percentiles (usec): 00:24:26.260 | 1.00th=[ 3425], 5.00th=[ 5669], 10.00th=[ 6718], 20.00th=[ 7832], 00:24:26.260 | 30.00th=[ 9372], 40.00th=[10683], 50.00th=[11207], 60.00th=[11338], 00:24:26.260 | 70.00th=[11731], 80.00th=[12911], 90.00th=[14222], 95.00th=[15270], 00:24:26.260 | 99.00th=[18744], 99.50th=[20317], 99.90th=[21627], 99.95th=[22938], 00:24:26.260 | 99.99th=[23462] 00:24:26.260 bw ( KiB/s): min=23104, max=23160, per=29.32%, avg=23132.00, stdev=39.60, samples=2 00:24:26.260 iops : min= 5776, max= 5790, avg=5783.00, stdev= 9.90, samples=2 00:24:26.260 lat (msec) : 4=1.50%, 10=31.24%, 20=65.97%, 50=1.29% 
00:24:26.260 cpu : usr=1.78%, sys=2.57%, ctx=694, majf=0, minf=1 00:24:26.260 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:24:26.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:26.260 issued rwts: total=5632,5910,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:26.260 job3: (groupid=0, jobs=1): err= 0: pid=3610159: Fri Apr 26 20:41:44 2024 00:24:26.260 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:24:26.260 slat (nsec): min=1025, max=12203k, avg=121752.13, stdev=802516.82 00:24:26.260 clat (usec): min=4713, max=32107, avg=13860.26, stdev=4991.39 00:24:26.260 lat (usec): min=4717, max=32113, avg=13982.01, stdev=5039.12 00:24:26.260 clat percentiles (usec): 00:24:26.260 | 1.00th=[ 7177], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10552], 00:24:26.260 | 30.00th=[11207], 40.00th=[11863], 50.00th=[12125], 60.00th=[12649], 00:24:26.260 | 70.00th=[13698], 80.00th=[16909], 90.00th=[20841], 95.00th=[25822], 00:24:26.260 | 99.00th=[31065], 99.50th=[31589], 99.90th=[32113], 99.95th=[32113], 00:24:26.260 | 99.99th=[32113] 00:24:26.260 write: IOPS=4041, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec); 0 zone resets 00:24:26.260 slat (nsec): min=1918, max=10714k, avg=135025.33, stdev=558314.60 00:24:26.260 clat (usec): min=1230, max=37801, avg=19208.99, stdev=7402.10 00:24:26.260 lat (usec): min=1242, max=37808, avg=19344.02, stdev=7450.42 00:24:26.260 clat percentiles (usec): 00:24:26.260 | 1.00th=[ 2999], 5.00th=[ 7242], 10.00th=[ 9372], 20.00th=[13042], 00:24:26.260 | 30.00th=[13960], 40.00th=[17171], 50.00th=[20841], 60.00th=[22414], 00:24:26.260 | 70.00th=[23200], 80.00th=[23725], 90.00th=[28181], 95.00th=[33162], 00:24:26.260 | 99.00th=[36439], 99.50th=[36963], 99.90th=[37487], 99.95th=[37487], 00:24:26.260 | 99.99th=[38011] 00:24:26.260 bw ( KiB/s): min=15048, max=16624, per=20.07%, avg=15836.00, stdev=1114.40, samples=2 00:24:26.260 iops : min= 3762, max= 4156, avg=3959.00, stdev=278.60, samples=2 00:24:26.260 lat (msec) : 2=0.03%, 4=0.72%, 10=10.61%, 20=54.62%, 50=34.03% 00:24:26.260 cpu : usr=2.18%, sys=1.98%, ctx=559, majf=0, minf=1 00:24:26.260 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:26.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:26.260 issued rwts: total=3584,4086,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:26.260 00:24:26.260 Run status group 0 (all jobs): 00:24:26.260 READ: bw=72.4MiB/s (75.9MB/s), 13.1MiB/s-23.8MiB/s (13.8MB/s-24.9MB/s), io=73.2MiB (76.7MB), run=1004-1011msec 00:24:26.260 WRITE: bw=77.0MiB/s (80.8MB/s), 13.9MiB/s-24.6MiB/s (14.6MB/s-25.8MB/s), io=77.9MiB (81.7MB), run=1004-1011msec 00:24:26.260 00:24:26.260 Disk stats (read/write): 00:24:26.260 nvme0n1: ios=2735/3072, merge=0/0, ticks=24701/26842, in_queue=51543, util=86.27% 00:24:26.260 nvme0n2: ios=5144/5127, merge=0/0, ticks=51962/53253, in_queue=105215, util=88.48% 00:24:26.260 nvme0n3: ios=4665/4927, merge=0/0, ticks=53508/51062, in_queue=104570, util=94.00% 00:24:26.260 nvme0n4: ios=3129/3463, merge=0/0, ticks=41950/62218, in_queue=104168, util=94.25% 00:24:26.260 20:41:44 -- target/fio.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p 
nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:24:26.260 [global] 00:24:26.260 thread=1 00:24:26.260 invalidate=1 00:24:26.260 rw=randwrite 00:24:26.260 time_based=1 00:24:26.260 runtime=1 00:24:26.260 ioengine=libaio 00:24:26.260 direct=1 00:24:26.260 bs=4096 00:24:26.260 iodepth=128 00:24:26.260 norandommap=0 00:24:26.260 numjobs=1 00:24:26.260 00:24:26.260 verify_dump=1 00:24:26.260 verify_backlog=512 00:24:26.260 verify_state_save=0 00:24:26.260 do_verify=1 00:24:26.260 verify=crc32c-intel 00:24:26.260 [job0] 00:24:26.260 filename=/dev/nvme0n1 00:24:26.260 [job1] 00:24:26.260 filename=/dev/nvme0n2 00:24:26.260 [job2] 00:24:26.260 filename=/dev/nvme0n3 00:24:26.260 [job3] 00:24:26.260 filename=/dev/nvme0n4 00:24:26.260 Could not set queue depth (nvme0n1) 00:24:26.260 Could not set queue depth (nvme0n2) 00:24:26.260 Could not set queue depth (nvme0n3) 00:24:26.260 Could not set queue depth (nvme0n4) 00:24:26.520 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:26.520 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:26.520 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:26.520 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:26.520 fio-3.35 00:24:26.520 Starting 4 threads 00:24:27.899 00:24:27.899 job0: (groupid=0, jobs=1): err= 0: pid=3610703: Fri Apr 26 20:41:45 2024 00:24:27.899 read: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec) 00:24:27.899 slat (nsec): min=949, max=10301k, avg=78809.99, stdev=580287.66 00:24:27.899 clat (usec): min=2872, max=33756, avg=10028.25, stdev=3012.25 00:24:27.899 lat (usec): min=2898, max=33759, avg=10107.06, stdev=3049.21 00:24:27.899 clat percentiles (usec): 00:24:27.899 | 1.00th=[ 6128], 5.00th=[ 6980], 10.00th=[ 7308], 20.00th=[ 8160], 00:24:27.899 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9765], 00:24:27.899 | 70.00th=[10814], 80.00th=[11600], 90.00th=[13435], 95.00th=[15270], 00:24:27.899 | 99.00th=[17957], 99.50th=[31327], 99.90th=[33162], 99.95th=[33162], 00:24:27.899 | 99.99th=[33817] 00:24:27.899 write: IOPS=6530, BW=25.5MiB/s (26.8MB/s)(25.7MiB/1006msec); 0 zone resets 00:24:27.899 slat (nsec): min=1606, max=8459.0k, avg=75618.44, stdev=498961.35 00:24:27.899 clat (usec): min=949, max=56110, avg=10048.23, stdev=6610.45 00:24:27.899 lat (usec): min=1208, max=57234, avg=10123.85, stdev=6640.30 00:24:27.899 clat percentiles (usec): 00:24:27.899 | 1.00th=[ 3032], 5.00th=[ 4228], 10.00th=[ 5211], 20.00th=[ 5932], 00:24:27.899 | 30.00th=[ 7504], 40.00th=[ 8717], 50.00th=[ 9372], 60.00th=[ 9896], 00:24:27.899 | 70.00th=[10159], 80.00th=[11076], 90.00th=[12911], 95.00th=[23725], 00:24:27.899 | 99.00th=[46400], 99.50th=[52167], 99.90th=[55837], 99.95th=[56361], 00:24:27.899 | 99.99th=[56361] 00:24:27.899 bw ( KiB/s): min=24240, max=27296, per=31.78%, avg=25768.00, stdev=2160.92, samples=2 00:24:27.899 iops : min= 6060, max= 6824, avg=6442.00, stdev=540.23, samples=2 00:24:27.899 lat (usec) : 1000=0.01% 00:24:27.899 lat (msec) : 2=0.08%, 4=1.48%, 10=62.39%, 20=32.55%, 50=3.12% 00:24:27.899 lat (msec) : 100=0.37% 00:24:27.899 cpu : usr=3.18%, sys=6.07%, ctx=532, majf=0, minf=1 00:24:27.899 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:24:27.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:27.899 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:27.899 issued rwts: total=6144,6570,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:27.899 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:27.899 job1: (groupid=0, jobs=1): err= 0: pid=3610717: Fri Apr 26 20:41:45 2024 00:24:27.899 read: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec) 00:24:27.899 slat (nsec): min=892, max=12165k, avg=77448.72, stdev=527767.41 00:24:27.899 clat (usec): min=3216, max=25874, avg=10096.44, stdev=2402.21 00:24:27.899 lat (usec): min=3250, max=25879, avg=10173.89, stdev=2431.23 00:24:27.900 clat percentiles (usec): 00:24:27.900 | 1.00th=[ 6390], 5.00th=[ 7242], 10.00th=[ 8029], 20.00th=[ 8717], 00:24:27.900 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9896], 00:24:27.900 | 70.00th=[10290], 80.00th=[11076], 90.00th=[12518], 95.00th=[13566], 00:24:27.900 | 99.00th=[19268], 99.50th=[25822], 99.90th=[25822], 99.95th=[25822], 00:24:27.900 | 99.99th=[25822] 00:24:27.900 write: IOPS=6638, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec); 0 zone resets 00:24:27.900 slat (nsec): min=1508, max=14527k, avg=73821.61, stdev=525960.81 00:24:27.900 clat (usec): min=321, max=24520, avg=9774.15, stdev=1696.78 00:24:27.900 lat (usec): min=1342, max=24561, avg=9847.97, stdev=1737.41 00:24:27.900 clat percentiles (usec): 00:24:27.900 | 1.00th=[ 5145], 5.00th=[ 7111], 10.00th=[ 7832], 20.00th=[ 9110], 00:24:27.900 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[ 9896], 00:24:27.900 | 70.00th=[10028], 80.00th=[10290], 90.00th=[11207], 95.00th=[13042], 00:24:27.900 | 99.00th=[15795], 99.50th=[16188], 99.90th=[17171], 99.95th=[18220], 00:24:27.900 | 99.99th=[24511] 00:24:27.900 bw ( KiB/s): min=24624, max=24624, per=30.37%, avg=24624.00, stdev= 0.00, samples=1 00:24:27.900 iops : min= 6156, max= 6156, avg=6156.00, stdev= 0.00, samples=1 00:24:27.900 lat (usec) : 500=0.01% 00:24:27.900 lat (msec) : 2=0.01%, 4=0.06%, 10=64.96%, 20=34.67%, 50=0.29% 00:24:27.900 cpu : usr=2.40%, sys=5.09%, ctx=513, majf=0, minf=1 00:24:27.900 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:24:27.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:27.900 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:27.900 issued rwts: total=6144,6652,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:27.900 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:27.900 job2: (groupid=0, jobs=1): err= 0: pid=3610741: Fri Apr 26 20:41:45 2024 00:24:27.900 read: IOPS=3304, BW=12.9MiB/s (13.5MB/s)(13.0MiB/1006msec) 00:24:27.900 slat (nsec): min=908, max=26814k, avg=151069.34, stdev=1038268.44 00:24:27.900 clat (usec): min=1650, max=121771, avg=16362.32, stdev=14033.86 00:24:27.900 lat (msec): min=4, max=121, avg=16.51, stdev=14.17 00:24:27.900 clat percentiles (msec): 00:24:27.900 | 1.00th=[ 8], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:24:27.900 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 14], 00:24:27.900 | 70.00th=[ 16], 80.00th=[ 19], 90.00th=[ 26], 95.00th=[ 36], 00:24:27.900 | 99.00th=[ 95], 99.50th=[ 106], 99.90th=[ 123], 99.95th=[ 123], 00:24:27.900 | 99.99th=[ 123] 00:24:27.900 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:24:27.900 slat (nsec): min=1635, max=11216k, avg=134819.15, stdev=750533.01 00:24:27.900 clat (msec): min=2, max=121, avg=20.32, stdev=19.24 00:24:27.900 lat (msec): min=2, max=121, avg=20.45, stdev=19.34 00:24:27.900 clat percentiles (msec): 
00:24:27.900 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:24:27.900 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 16], 00:24:27.900 | 70.00th=[ 22], 80.00th=[ 23], 90.00th=[ 46], 95.00th=[ 58], 00:24:27.900 | 99.00th=[ 102], 99.50th=[ 103], 99.90th=[ 105], 99.95th=[ 123], 00:24:27.900 | 99.99th=[ 123] 00:24:27.900 bw ( KiB/s): min=10224, max=18448, per=17.68%, avg=14336.00, stdev=5815.25, samples=2 00:24:27.900 iops : min= 2556, max= 4612, avg=3584.00, stdev=1453.81, samples=2 00:24:27.900 lat (msec) : 2=0.01%, 4=0.61%, 10=18.67%, 20=56.34%, 50=18.25% 00:24:27.900 lat (msec) : 100=5.17%, 250=0.94% 00:24:27.900 cpu : usr=2.09%, sys=3.38%, ctx=357, majf=0, minf=1 00:24:27.900 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:27.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:27.900 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:27.900 issued rwts: total=3324,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:27.900 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:27.900 job3: (groupid=0, jobs=1): err= 0: pid=3610751: Fri Apr 26 20:41:45 2024 00:24:27.900 read: IOPS=3305, BW=12.9MiB/s (13.5MB/s)(13.0MiB/1006msec) 00:24:27.900 slat (nsec): min=922, max=11195k, avg=118945.45, stdev=838689.04 00:24:27.900 clat (usec): min=2823, max=31979, avg=13994.20, stdev=4693.58 00:24:27.900 lat (usec): min=4080, max=31983, avg=14113.14, stdev=4744.97 00:24:27.900 clat percentiles (usec): 00:24:27.900 | 1.00th=[ 6652], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10814], 00:24:27.900 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11994], 60.00th=[12256], 00:24:27.900 | 70.00th=[15664], 80.00th=[17171], 90.00th=[20841], 95.00th=[23987], 00:24:27.900 | 99.00th=[29492], 99.50th=[30540], 99.90th=[31851], 99.95th=[31851], 00:24:27.900 | 99.99th=[31851] 00:24:27.900 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:24:27.900 slat (nsec): min=1616, max=14657k, avg=165197.07, stdev=913981.05 00:24:27.900 clat (usec): min=2364, max=97751, avg=22609.45, stdev=20873.63 00:24:27.900 lat (usec): min=2371, max=97760, avg=22774.65, stdev=21001.90 00:24:27.900 clat percentiles (usec): 00:24:27.900 | 1.00th=[ 3359], 5.00th=[ 6849], 10.00th=[ 7308], 20.00th=[ 9634], 00:24:27.900 | 30.00th=[11207], 40.00th=[12518], 50.00th=[14746], 60.00th=[17957], 00:24:27.900 | 70.00th=[22152], 80.00th=[25297], 90.00th=[55837], 95.00th=[76022], 00:24:27.900 | 99.00th=[93848], 99.50th=[95945], 99.90th=[98042], 99.95th=[98042], 00:24:27.900 | 99.99th=[98042] 00:24:27.900 bw ( KiB/s): min=13616, max=15056, per=17.68%, avg=14336.00, stdev=1018.23, samples=2 00:24:27.900 iops : min= 3404, max= 3764, avg=3584.00, stdev=254.56, samples=2 00:24:27.900 lat (msec) : 4=0.83%, 10=15.13%, 20=58.43%, 50=19.77%, 100=5.85% 00:24:27.900 cpu : usr=1.99%, sys=3.78%, ctx=355, majf=0, minf=1 00:24:27.900 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:27.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:27.900 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:27.900 issued rwts: total=3325,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:27.900 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:27.900 00:24:27.900 Run status group 0 (all jobs): 00:24:27.900 READ: bw=73.5MiB/s (77.1MB/s), 12.9MiB/s-24.0MiB/s (13.5MB/s-25.1MB/s), io=74.0MiB (77.6MB), run=1002-1006msec 00:24:27.900 WRITE: bw=79.2MiB/s 
(83.0MB/s), 13.9MiB/s-25.9MiB/s (14.6MB/s-27.2MB/s), io=79.6MiB (83.5MB), run=1002-1006msec 00:24:27.900 00:24:27.900 Disk stats (read/write): 00:24:27.900 nvme0n1: ios=5154/5246, merge=0/0, ticks=50324/52984, in_queue=103308, util=99.30% 00:24:27.900 nvme0n2: ios=5171/5471, merge=0/0, ticks=30495/30090, in_queue=60585, util=96.84% 00:24:27.900 nvme0n3: ios=3116/3439, merge=0/0, ticks=41886/56682, in_queue=98568, util=99.26% 00:24:27.900 nvme0n4: ios=2582/2672, merge=0/0, ticks=36043/69043, in_queue=105086, util=95.95% 00:24:27.900 20:41:45 -- target/fio.sh@55 -- # sync 00:24:27.900 20:41:45 -- target/fio.sh@59 -- # fio_pid=3610811 00:24:27.900 20:41:45 -- target/fio.sh@61 -- # sleep 3 00:24:27.900 20:41:45 -- target/fio.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:24:27.900 [global] 00:24:27.900 thread=1 00:24:27.900 invalidate=1 00:24:27.900 rw=read 00:24:27.900 time_based=1 00:24:27.900 runtime=10 00:24:27.900 ioengine=libaio 00:24:27.900 direct=1 00:24:27.900 bs=4096 00:24:27.900 iodepth=1 00:24:27.900 norandommap=1 00:24:27.900 numjobs=1 00:24:27.900 00:24:27.900 [job0] 00:24:27.900 filename=/dev/nvme0n1 00:24:27.900 [job1] 00:24:27.900 filename=/dev/nvme0n2 00:24:27.900 [job2] 00:24:27.900 filename=/dev/nvme0n3 00:24:27.900 [job3] 00:24:27.900 filename=/dev/nvme0n4 00:24:27.900 Could not set queue depth (nvme0n1) 00:24:27.900 Could not set queue depth (nvme0n2) 00:24:27.900 Could not set queue depth (nvme0n3) 00:24:27.900 Could not set queue depth (nvme0n4) 00:24:28.159 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:28.159 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:28.159 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:28.159 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:28.159 fio-3.35 00:24:28.159 Starting 4 threads 00:24:30.691 20:41:48 -- target/fio.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:24:30.691 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=34627584, buflen=4096 00:24:30.691 fio: pid=3611250, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:24:30.952 20:41:49 -- target/fio.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:24:30.952 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=35479552, buflen=4096 00:24:30.952 fio: pid=3611249, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:24:30.952 20:41:49 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:30.952 20:41:49 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:24:31.213 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=462848, buflen=4096 00:24:31.213 fio: pid=3611247, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:24:31.213 20:41:49 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:31.213 20:41:49 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:24:31.213 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=34185216, buflen=4096 
00:24:31.213 fio: pid=3611248, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:24:31.213 20:41:49 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:31.213 20:41:49 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:24:31.213 00:24:31.213 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3611247: Fri Apr 26 20:41:49 2024 00:24:31.213 read: IOPS=38, BW=154KiB/s (158kB/s)(452KiB/2927msec) 00:24:31.213 slat (nsec): min=4164, max=86069, avg=21028.58, stdev=13508.79 00:24:31.213 clat (usec): min=336, max=59784, avg=25865.87, stdev=20460.59 00:24:31.213 lat (usec): min=348, max=59816, avg=25886.79, stdev=20470.42 00:24:31.213 clat percentiles (usec): 00:24:31.213 | 1.00th=[ 343], 5.00th=[ 367], 10.00th=[ 388], 20.00th=[ 412], 00:24:31.213 | 30.00th=[ 490], 40.00th=[40633], 50.00th=[41157], 60.00th=[41681], 00:24:31.214 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:24:31.214 | 99.00th=[43254], 99.50th=[60031], 99.90th=[60031], 99.95th=[60031], 00:24:31.214 | 99.99th=[60031] 00:24:31.214 bw ( KiB/s): min= 96, max= 440, per=0.50%, avg=164.80, stdev=153.84, samples=5 00:24:31.214 iops : min= 24, max= 110, avg=41.20, stdev=38.46, samples=5 00:24:31.214 lat (usec) : 500=31.58%, 750=5.26%, 1000=1.75% 00:24:31.214 lat (msec) : 50=59.65%, 100=0.88% 00:24:31.214 cpu : usr=0.03%, sys=0.10%, ctx=117, majf=0, minf=1 00:24:31.214 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:31.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:31.214 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:31.214 issued rwts: total=114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:31.214 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:31.214 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3611248: Fri Apr 26 20:41:49 2024 00:24:31.214 read: IOPS=2700, BW=10.5MiB/s (11.1MB/s)(32.6MiB/3091msec) 00:24:31.214 slat (usec): min=3, max=11881, avg=11.90, stdev=221.65 00:24:31.214 clat (usec): min=197, max=42523, avg=357.24, stdev=1174.20 00:24:31.214 lat (usec): min=205, max=48000, avg=369.14, stdev=1219.72 00:24:31.214 clat percentiles (usec): 00:24:31.214 | 1.00th=[ 215], 5.00th=[ 225], 10.00th=[ 235], 20.00th=[ 260], 00:24:31.214 | 30.00th=[ 277], 40.00th=[ 289], 50.00th=[ 310], 60.00th=[ 338], 00:24:31.214 | 70.00th=[ 367], 80.00th=[ 388], 90.00th=[ 408], 95.00th=[ 433], 00:24:31.214 | 99.00th=[ 494], 99.50th=[ 545], 99.90th=[ 1352], 99.95th=[41681], 00:24:31.214 | 99.99th=[42730] 00:24:31.214 bw ( KiB/s): min= 7032, max=13816, per=33.45%, avg=11070.40, stdev=2797.41, samples=5 00:24:31.214 iops : min= 1758, max= 3454, avg=2767.60, stdev=699.35, samples=5 00:24:31.214 lat (usec) : 250=16.72%, 500=82.40%, 750=0.65%, 1000=0.06% 00:24:31.214 lat (msec) : 2=0.06%, 50=0.10% 00:24:31.214 cpu : usr=0.74%, sys=2.39%, ctx=8355, majf=0, minf=1 00:24:31.214 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:31.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:31.214 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:31.214 issued rwts: total=8347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:31.214 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:31.214 job2: (groupid=0, jobs=1): 
err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3611249: Fri Apr 26 20:41:49 2024 00:24:31.214 read: IOPS=3129, BW=12.2MiB/s (12.8MB/s)(33.8MiB/2768msec) 00:24:31.214 slat (usec): min=3, max=14840, avg= 9.61, stdev=205.35 00:24:31.214 clat (usec): min=199, max=1743, avg=308.90, stdev=43.94 00:24:31.214 lat (usec): min=205, max=15813, avg=318.51, stdev=220.36 00:24:31.214 clat percentiles (usec): 00:24:31.214 | 1.00th=[ 237], 5.00th=[ 262], 10.00th=[ 273], 20.00th=[ 281], 00:24:31.214 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 306], 60.00th=[ 314], 00:24:31.214 | 70.00th=[ 322], 80.00th=[ 330], 90.00th=[ 347], 95.00th=[ 371], 00:24:31.214 | 99.00th=[ 441], 99.50th=[ 486], 99.90th=[ 783], 99.95th=[ 955], 00:24:31.214 | 99.99th=[ 1745] 00:24:31.214 bw ( KiB/s): min=11904, max=13176, per=37.85%, avg=12528.00, stdev=494.61, samples=5 00:24:31.214 iops : min= 2976, max= 3294, avg=3132.00, stdev=123.65, samples=5 00:24:31.214 lat (usec) : 250=2.54%, 500=97.10%, 750=0.24%, 1000=0.07% 00:24:31.214 lat (msec) : 2=0.03% 00:24:31.214 cpu : usr=0.87%, sys=3.47%, ctx=8666, majf=0, minf=1 00:24:31.214 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:31.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:31.214 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:31.214 issued rwts: total=8663,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:31.214 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:31.214 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3611250: Fri Apr 26 20:41:49 2024 00:24:31.214 read: IOPS=3233, BW=12.6MiB/s (13.2MB/s)(33.0MiB/2615msec) 00:24:31.214 slat (nsec): min=3822, max=38576, avg=6999.51, stdev=1383.58 00:24:31.214 clat (usec): min=214, max=1408, avg=301.21, stdev=44.23 00:24:31.214 lat (usec): min=220, max=1415, avg=308.21, stdev=44.24 00:24:31.214 clat percentiles (usec): 00:24:31.214 | 1.00th=[ 229], 5.00th=[ 243], 10.00th=[ 253], 20.00th=[ 269], 00:24:31.214 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 306], 00:24:31.214 | 70.00th=[ 318], 80.00th=[ 334], 90.00th=[ 351], 95.00th=[ 375], 00:24:31.214 | 99.00th=[ 429], 99.50th=[ 478], 99.90th=[ 578], 99.95th=[ 652], 00:24:31.214 | 99.99th=[ 1401] 00:24:31.214 bw ( KiB/s): min=12096, max=13640, per=39.03%, avg=12916.80, stdev=559.96, samples=5 00:24:31.214 iops : min= 3024, max= 3410, avg=3229.20, stdev=139.99, samples=5 00:24:31.214 lat (usec) : 250=8.41%, 500=91.27%, 750=0.27%, 1000=0.02% 00:24:31.214 lat (msec) : 2=0.01% 00:24:31.214 cpu : usr=0.50%, sys=3.25%, ctx=8456, majf=0, minf=2 00:24:31.214 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:31.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:31.214 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:31.214 issued rwts: total=8455,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:31.214 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:31.214 00:24:31.214 Run status group 0 (all jobs): 00:24:31.214 READ: bw=32.3MiB/s (33.9MB/s), 154KiB/s-12.6MiB/s (158kB/s-13.2MB/s), io=99.9MiB (105MB), run=2615-3091msec 00:24:31.214 00:24:31.214 Disk stats (read/write): 00:24:31.214 nvme0n1: ios=111/0, merge=0/0, ticks=2821/0, in_queue=2821, util=95.56% 00:24:31.214 nvme0n2: ios=8184/0, merge=0/0, ticks=3667/0, in_queue=3667, util=98.56% 00:24:31.214 nvme0n3: ios=8244/0, merge=0/0, ticks=2630/0, 
in_queue=2630, util=99.12% 00:24:31.214 nvme0n4: ios=8480/0, merge=0/0, ticks=3188/0, in_queue=3188, util=99.78% 00:24:31.474 20:41:49 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:31.474 20:41:49 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:24:31.733 20:41:49 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:31.733 20:41:49 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:24:31.733 20:41:49 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:31.733 20:41:49 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:24:31.992 20:41:50 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:31.992 20:41:50 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:24:31.992 20:41:50 -- target/fio.sh@69 -- # fio_status=0 00:24:31.992 20:41:50 -- target/fio.sh@70 -- # wait 3610811 00:24:31.992 20:41:50 -- target/fio.sh@70 -- # fio_status=4 00:24:31.992 20:41:50 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:32.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:32.559 20:41:50 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:32.559 20:41:50 -- common/autotest_common.sh@1198 -- # local i=0 00:24:32.559 20:41:50 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:32.559 20:41:50 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:32.559 20:41:50 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:32.559 20:41:50 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:32.559 20:41:50 -- common/autotest_common.sh@1210 -- # return 0 00:24:32.559 20:41:50 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:24:32.559 20:41:50 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:24:32.559 nvmf hotplug test: fio failed as expected 00:24:32.559 20:41:50 -- target/fio.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:32.559 20:41:50 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:24:32.559 20:41:50 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:24:32.559 20:41:50 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:24:32.559 20:41:50 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:24:32.559 20:41:50 -- target/fio.sh@91 -- # nvmftestfini 00:24:32.559 20:41:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:32.559 20:41:50 -- nvmf/common.sh@116 -- # sync 00:24:32.559 20:41:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:32.559 20:41:50 -- nvmf/common.sh@119 -- # set +e 00:24:32.559 20:41:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:32.559 20:41:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:32.559 rmmod nvme_tcp 00:24:32.559 rmmod nvme_fabrics 00:24:32.559 rmmod nvme_keyring 00:24:32.559 20:41:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:32.559 20:41:50 -- nvmf/common.sh@123 -- # set -e 00:24:32.559 20:41:50 -- nvmf/common.sh@124 -- # return 0 00:24:32.559 20:41:50 -- nvmf/common.sh@477 -- # '[' -n 3607505 ']' 00:24:32.559 
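The hotplug pass is now complete: each backing Malloc bdev was deleted while fio still had reads in flight, all four namespaces surfaced the expected err=121 (Remote I/O error), and the harness reports 'fio failed as expected' before tearing down. The killprocess trace that follows stops PID 3607505 and nvmf_tcp_fini flushes the test addresses; pulled together, the teardown is a handful of commands. A minimal stand-alone sketch, assuming $nvmfpid holds the target PID (3607505 in this run) and using the NQN and serial from this run:

    rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1           # drop the initiator connection
    # simplified poll: wait until no block device with the test serial remains
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # remove the subsystem on the target
    rm -f ./local-job*-verify.state                         # discard fio verify-state files
    modprobe -v -r nvme-tcp nvme-fabrics                    # unload initiator transport modules
    kill "$nvmfpid" && wait "$nvmfpid"                      # stop nvmf_tgt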
20:41:50 -- nvmf/common.sh@478 -- # killprocess 3607505 00:24:32.559 20:41:50 -- common/autotest_common.sh@926 -- # '[' -z 3607505 ']' 00:24:32.559 20:41:50 -- common/autotest_common.sh@930 -- # kill -0 3607505 00:24:32.559 20:41:50 -- common/autotest_common.sh@931 -- # uname 00:24:32.559 20:41:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:32.559 20:41:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3607505 00:24:32.819 20:41:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:32.819 20:41:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:32.819 20:41:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3607505' 00:24:32.820 killing process with pid 3607505 00:24:32.820 20:41:50 -- common/autotest_common.sh@945 -- # kill 3607505 00:24:32.820 20:41:50 -- common/autotest_common.sh@950 -- # wait 3607505 00:24:33.080 20:41:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:33.080 20:41:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:33.080 20:41:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:33.080 20:41:51 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:33.080 20:41:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:33.080 20:41:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.080 20:41:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:33.080 20:41:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.619 20:41:53 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:35.619 00:24:35.619 real 0m26.925s 00:24:35.619 user 2m29.305s 00:24:35.619 sys 0m7.594s 00:24:35.619 20:41:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:35.619 20:41:53 -- common/autotest_common.sh@10 -- # set +x 00:24:35.619 ************************************ 00:24:35.619 END TEST nvmf_fio_target 00:24:35.619 ************************************ 00:24:35.619 20:41:53 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:24:35.619 20:41:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:35.619 20:41:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:35.619 20:41:53 -- common/autotest_common.sh@10 -- # set +x 00:24:35.619 ************************************ 00:24:35.619 START TEST nvmf_bdevio 00:24:35.619 ************************************ 00:24:35.619 20:41:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:24:35.619 * Looking for test storage... 
00:24:35.619 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:24:35.619 20:41:53 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:24:35.619 20:41:53 -- nvmf/common.sh@7 -- # uname -s 00:24:35.619 20:41:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:35.619 20:41:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:35.619 20:41:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:35.619 20:41:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:35.619 20:41:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:35.619 20:41:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:35.619 20:41:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:35.619 20:41:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:35.619 20:41:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:35.619 20:41:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:35.619 20:41:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:24:35.619 20:41:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:24:35.619 20:41:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:35.619 20:41:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:35.619 20:41:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:35.619 20:41:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:24:35.619 20:41:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:35.619 20:41:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.619 20:41:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.620 20:41:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.620 20:41:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.620 20:41:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.620 20:41:53 -- paths/export.sh@5 -- # export PATH 00:24:35.620 20:41:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.620 20:41:53 -- nvmf/common.sh@46 -- # : 0 00:24:35.620 20:41:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:35.620 20:41:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:35.620 20:41:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:35.620 20:41:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:35.620 20:41:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:35.620 20:41:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:35.620 20:41:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:35.620 20:41:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:35.620 20:41:53 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:35.620 20:41:53 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:35.620 20:41:53 -- target/bdevio.sh@14 -- # nvmftestinit 00:24:35.620 20:41:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:35.620 20:41:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:35.620 20:41:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:35.620 20:41:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:35.620 20:41:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:35.620 20:41:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.620 20:41:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:35.620 20:41:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.620 20:41:53 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:24:35.620 20:41:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:35.620 20:41:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:35.620 20:41:53 -- common/autotest_common.sh@10 -- # set +x 00:24:40.892 20:41:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:40.892 20:41:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:40.892 20:41:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:40.892 20:41:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:40.892 20:41:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:40.892 20:41:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:40.892 20:41:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:40.892 20:41:58 -- nvmf/common.sh@294 -- # net_devs=() 00:24:40.892 20:41:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:40.892 20:41:58 -- 
nvmf/common.sh@295 -- # e810=() 00:24:40.892 20:41:58 -- nvmf/common.sh@295 -- # local -ga e810 00:24:40.892 20:41:58 -- nvmf/common.sh@296 -- # x722=() 00:24:40.892 20:41:58 -- nvmf/common.sh@296 -- # local -ga x722 00:24:40.892 20:41:58 -- nvmf/common.sh@297 -- # mlx=() 00:24:40.892 20:41:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:40.892 20:41:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:40.892 20:41:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:40.892 20:41:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:40.892 20:41:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:40.892 20:41:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:40.892 20:41:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:40.892 20:41:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:40.892 20:41:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:40.892 20:41:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:40.892 20:41:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:40.892 20:41:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:40.892 20:41:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:40.892 20:41:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:40.892 20:41:58 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:24:40.892 20:41:58 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:24:40.892 20:41:58 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:24:40.892 20:41:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:40.892 20:41:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:40.892 20:41:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:24:40.892 Found 0000:27:00.0 (0x8086 - 0x159b) 00:24:40.892 20:41:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:40.892 20:41:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:40.892 20:41:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:40.892 20:41:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:40.892 20:41:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:40.892 20:41:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:40.892 20:41:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:24:40.892 Found 0000:27:00.1 (0x8086 - 0x159b) 00:24:40.892 20:41:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:40.892 20:41:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:40.892 20:41:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:40.892 20:41:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:40.892 20:41:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:40.892 20:41:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:40.892 20:41:58 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:24:40.892 20:41:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:40.892 20:41:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.892 20:41:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:40.892 20:41:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.892 20:41:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:24:40.892 Found net devices under 0000:27:00.0: cvl_0_0 00:24:40.892 
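Port discovery here is a plain sysfs walk: each PCI function that matches a supported-device table (both ports of an Intel E810-family NIC, device ID 0x159b, bound to the ice driver in this run) has its attached net interfaces read from the kernel. A condensed sketch of that lookup, using the PCI addresses from this machine:

    for pci in 0000:27:00.0 0000:27:00.1; do
        # interfaces bound to this PCI function appear as directory names in sysfs
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keeping e.g. cvl_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done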
20:41:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.892 20:41:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:40.892 20:41:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.892 20:41:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:40.893 20:41:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.893 20:41:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:24:40.893 Found net devices under 0000:27:00.1: cvl_0_1 00:24:40.893 20:41:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.893 20:41:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:40.893 20:41:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:40.893 20:41:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:40.893 20:41:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:40.893 20:41:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:40.893 20:41:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:40.893 20:41:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:40.893 20:41:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:40.893 20:41:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:40.893 20:41:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:40.893 20:41:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:40.893 20:41:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:40.893 20:41:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:40.893 20:41:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:40.893 20:41:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:40.893 20:41:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:40.893 20:41:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:40.893 20:41:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:40.893 20:41:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:40.893 20:41:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:40.893 20:41:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:40.893 20:41:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:40.893 20:41:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:40.893 20:41:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:40.893 20:41:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:40.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:40.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.476 ms 00:24:40.893 00:24:40.893 --- 10.0.0.2 ping statistics --- 00:24:40.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:40.893 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:24:40.893 20:41:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:40.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:40.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:24:40.893 00:24:40.893 --- 10.0.0.1 ping statistics --- 00:24:40.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:40.893 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:24:40.893 20:41:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:40.893 20:41:58 -- nvmf/common.sh@410 -- # return 0 00:24:40.893 20:41:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:40.893 20:41:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:40.893 20:41:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:40.893 20:41:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:40.893 20:41:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:40.893 20:41:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:40.893 20:41:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:40.893 20:41:58 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:24:40.893 20:41:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:40.893 20:41:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:40.893 20:41:58 -- common/autotest_common.sh@10 -- # set +x 00:24:40.893 20:41:58 -- nvmf/common.sh@469 -- # nvmfpid=3616068 00:24:40.893 20:41:58 -- nvmf/common.sh@470 -- # waitforlisten 3616068 00:24:40.893 20:41:58 -- common/autotest_common.sh@819 -- # '[' -z 3616068 ']' 00:24:40.893 20:41:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.893 20:41:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:40.893 20:41:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.893 20:41:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:40.893 20:41:58 -- common/autotest_common.sh@10 -- # set +x 00:24:40.893 20:41:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:24:40.893 [2024-04-26 20:41:58.494595] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:40.893 [2024-04-26 20:41:58.494665] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:40.893 EAL: No free 2048 kB hugepages reported on node 1 00:24:40.893 [2024-04-26 20:41:58.584558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:40.893 [2024-04-26 20:41:58.682609] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:40.893 [2024-04-26 20:41:58.682785] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:40.893 [2024-04-26 20:41:58.682798] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:40.893 [2024-04-26 20:41:58.682809] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
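Both pings succeed, confirming the loop is wired up: the target side owns 10.0.0.2 inside the cvl_0_0_ns_spdk namespace while the initiator keeps 10.0.0.1 on the host, and nvmf_tgt is launched inside that namespace with core mask 0x78 (cores 3-6, matching the four reactors reported next). Condensed from the trace above, the namespace plumbing and launch are:

    ip netns add cvl_0_0_ns_spdk                       # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the first port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                 # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78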
00:24:40.893 [2024-04-26 20:41:58.683011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:40.893 [2024-04-26 20:41:58.683149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:24:40.893 [2024-04-26 20:41:58.683250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:40.893 [2024-04-26 20:41:58.683278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:24:40.893 20:41:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:40.893 20:41:59 -- common/autotest_common.sh@852 -- # return 0 00:24:40.893 20:41:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:40.893 20:41:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:40.893 20:41:59 -- common/autotest_common.sh@10 -- # set +x 00:24:41.153 20:41:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:41.153 20:41:59 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:41.153 20:41:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.153 20:41:59 -- common/autotest_common.sh@10 -- # set +x 00:24:41.153 [2024-04-26 20:41:59.267108] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:41.153 20:41:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.153 20:41:59 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:41.153 20:41:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.153 20:41:59 -- common/autotest_common.sh@10 -- # set +x 00:24:41.153 Malloc0 00:24:41.153 20:41:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.153 20:41:59 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:41.153 20:41:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.153 20:41:59 -- common/autotest_common.sh@10 -- # set +x 00:24:41.153 20:41:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.153 20:41:59 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:41.153 20:41:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.153 20:41:59 -- common/autotest_common.sh@10 -- # set +x 00:24:41.153 20:41:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.153 20:41:59 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:41.153 20:41:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.153 20:41:59 -- common/autotest_common.sh@10 -- # set +x 00:24:41.153 [2024-04-26 20:41:59.339519] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:41.153 20:41:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.153 20:41:59 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:24:41.153 20:41:59 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:24:41.153 20:41:59 -- nvmf/common.sh@520 -- # config=() 00:24:41.153 20:41:59 -- nvmf/common.sh@520 -- # local subsystem config 00:24:41.153 20:41:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:41.153 20:41:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:41.153 { 00:24:41.153 "params": { 00:24:41.153 "name": "Nvme$subsystem", 00:24:41.153 "trtype": "$TEST_TRANSPORT", 00:24:41.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:41.153 "adrfam": "ipv4", 00:24:41.153 "trsvcid": "$NVMF_PORT", 
00:24:41.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:41.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:41.153 "hdgst": ${hdgst:-false}, 00:24:41.153 "ddgst": ${ddgst:-false} 00:24:41.153 }, 00:24:41.153 "method": "bdev_nvme_attach_controller" 00:24:41.153 } 00:24:41.153 EOF 00:24:41.153 )") 00:24:41.153 20:41:59 -- nvmf/common.sh@542 -- # cat 00:24:41.153 20:41:59 -- nvmf/common.sh@544 -- # jq . 00:24:41.153 20:41:59 -- nvmf/common.sh@545 -- # IFS=, 00:24:41.153 20:41:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:41.153 "params": { 00:24:41.153 "name": "Nvme1", 00:24:41.153 "trtype": "tcp", 00:24:41.153 "traddr": "10.0.0.2", 00:24:41.153 "adrfam": "ipv4", 00:24:41.153 "trsvcid": "4420", 00:24:41.153 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:41.153 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:41.153 "hdgst": false, 00:24:41.153 "ddgst": false 00:24:41.153 }, 00:24:41.153 "method": "bdev_nvme_attach_controller" 00:24:41.153 }' 00:24:41.153 [2024-04-26 20:41:59.425553] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:41.153 [2024-04-26 20:41:59.425687] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3616140 ] 00:24:41.412 EAL: No free 2048 kB hugepages reported on node 1 00:24:41.412 [2024-04-26 20:41:59.556915] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:41.412 [2024-04-26 20:41:59.650332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:41.412 [2024-04-26 20:41:59.650432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.412 [2024-04-26 20:41:59.650438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:41.669 [2024-04-26 20:41:59.870118] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:24:41.669 [2024-04-26 20:41:59.870156] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:24:41.669 I/O targets: 00:24:41.669 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:24:41.669 00:24:41.669 00:24:41.669 CUnit - A unit testing framework for C - Version 2.1-3 00:24:41.669 http://cunit.sourceforge.net/ 00:24:41.669 00:24:41.669 00:24:41.669 Suite: bdevio tests on: Nvme1n1 00:24:41.669 Test: blockdev write read block ...passed 00:24:41.669 Test: blockdev write zeroes read block ...passed 00:24:41.669 Test: blockdev write zeroes read no split ...passed 00:24:41.928 Test: blockdev write zeroes read split ...passed 00:24:41.928 Test: blockdev write zeroes read split partial ...passed 00:24:41.928 Test: blockdev reset ...[2024-04-26 20:42:00.101272] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.928 [2024-04-26 20:42:00.101396] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003140 (9): Bad file descriptor 00:24:41.928 [2024-04-26 20:42:00.156695] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:41.928 passed 00:24:41.928 Test: blockdev write read 8 blocks ...passed 00:24:41.928 Test: blockdev write read size > 128k ...passed 00:24:41.928 Test: blockdev write read invalid size ...passed 00:24:41.928 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:41.928 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:41.928 Test: blockdev write read max offset ...passed 00:24:42.188 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:42.188 Test: blockdev writev readv 8 blocks ...passed 00:24:42.188 Test: blockdev writev readv 30 x 1block ...passed 00:24:42.188 Test: blockdev writev readv block ...passed 00:24:42.188 Test: blockdev writev readv size > 128k ...passed 00:24:42.188 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:42.188 Test: blockdev comparev and writev ...[2024-04-26 20:42:00.373494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:42.188 [2024-04-26 20:42:00.373537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:42.188 [2024-04-26 20:42:00.373557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:42.188 [2024-04-26 20:42:00.373566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.188 [2024-04-26 20:42:00.373912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:42.188 [2024-04-26 20:42:00.373923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:42.188 [2024-04-26 20:42:00.373936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:42.188 [2024-04-26 20:42:00.373947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:42.188 [2024-04-26 20:42:00.374282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:42.188 [2024-04-26 20:42:00.374292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:42.188 [2024-04-26 20:42:00.374305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:42.188 [2024-04-26 20:42:00.374313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:42.188 [2024-04-26 20:42:00.374658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:42.188 [2024-04-26 20:42:00.374669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:42.188 [2024-04-26 20:42:00.374683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:42.188 [2024-04-26 20:42:00.374691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:42.188 passed 00:24:42.188 Test: blockdev nvme passthru rw ...passed 00:24:42.188 Test: blockdev nvme passthru vendor specific ...[2024-04-26 20:42:00.457646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:42.188 [2024-04-26 20:42:00.457668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:42.188 [2024-04-26 20:42:00.457817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:42.188 [2024-04-26 20:42:00.457827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:42.188 [2024-04-26 20:42:00.457980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:42.188 [2024-04-26 20:42:00.457989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:42.188 [2024-04-26 20:42:00.458145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:42.188 [2024-04-26 20:42:00.458156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:42.188 passed 00:24:42.188 Test: blockdev nvme admin passthru ...passed 00:24:42.188 Test: blockdev copy ...passed 00:24:42.188 00:24:42.188 Run Summary: Type Total Ran Passed Failed Inactive 00:24:42.188 suites 1 1 n/a 0 0 00:24:42.188 tests 23 23 23 0 0 00:24:42.188 asserts 152 152 152 0 n/a 00:24:42.188 00:24:42.188 Elapsed time = 1.304 seconds 00:24:42.759 20:42:00 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:42.759 20:42:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:42.759 20:42:00 -- common/autotest_common.sh@10 -- # set +x 00:24:42.759 20:42:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:42.759 20:42:00 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:24:42.759 20:42:00 -- target/bdevio.sh@30 -- # nvmftestfini 00:24:42.759 20:42:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:42.759 20:42:00 -- nvmf/common.sh@116 -- # sync 00:24:42.759 20:42:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:42.759 20:42:00 -- nvmf/common.sh@119 -- # set +e 00:24:42.759 20:42:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:42.759 20:42:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:42.759 rmmod nvme_tcp 00:24:42.759 rmmod nvme_fabrics 00:24:42.759 rmmod nvme_keyring 00:24:42.759 20:42:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:42.759 20:42:00 -- nvmf/common.sh@123 -- # set -e 00:24:42.759 20:42:00 -- nvmf/common.sh@124 -- # return 0 00:24:42.759 20:42:00 -- nvmf/common.sh@477 -- # '[' -n 3616068 ']' 00:24:42.759 20:42:00 -- nvmf/common.sh@478 -- # killprocess 3616068 00:24:42.759 20:42:00 -- common/autotest_common.sh@926 -- # '[' -z 3616068 ']' 00:24:42.759 20:42:00 -- common/autotest_common.sh@930 -- # kill -0 3616068 00:24:42.759 20:42:00 -- common/autotest_common.sh@931 -- # uname 00:24:42.759 20:42:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:42.759 20:42:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3616068 00:24:42.759 20:42:01 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:24:42.759 20:42:01 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:24:42.759 20:42:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3616068' 00:24:42.759 killing process with pid 3616068 00:24:42.759 20:42:01 -- common/autotest_common.sh@945 -- # kill 3616068 00:24:42.759 20:42:01 -- common/autotest_common.sh@950 -- # wait 3616068 00:24:43.329 20:42:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:43.329 20:42:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:43.329 20:42:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:43.329 20:42:01 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:43.329 20:42:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:43.329 20:42:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.329 20:42:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:43.329 20:42:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.861 20:42:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:45.861 00:24:45.861 real 0m10.081s 00:24:45.861 user 0m15.030s 00:24:45.861 sys 0m4.233s 00:24:45.861 20:42:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:45.861 20:42:03 -- common/autotest_common.sh@10 -- # set +x 00:24:45.861 ************************************ 00:24:45.861 END TEST nvmf_bdevio 00:24:45.861 ************************************ 00:24:45.861 20:42:03 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:24:45.861 20:42:03 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:45.861 20:42:03 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:24:45.861 20:42:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:45.861 20:42:03 -- common/autotest_common.sh@10 -- # set +x 00:24:45.861 ************************************ 00:24:45.861 START TEST nvmf_bdevio_no_huge 00:24:45.861 ************************************ 00:24:45.861 20:42:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:45.861 * Looking for test storage... 
00:24:45.861 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:24:45.861 20:42:03 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:24:45.861 20:42:03 -- nvmf/common.sh@7 -- # uname -s 00:24:45.861 20:42:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:45.861 20:42:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:45.861 20:42:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:45.861 20:42:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:45.861 20:42:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:45.861 20:42:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:45.861 20:42:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:45.861 20:42:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:45.861 20:42:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:45.861 20:42:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:45.861 20:42:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:24:45.861 20:42:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:24:45.861 20:42:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:45.861 20:42:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:45.861 20:42:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:45.861 20:42:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:24:45.861 20:42:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:45.861 20:42:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.861 20:42:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.861 20:42:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.861 20:42:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.861 20:42:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.861 20:42:03 -- paths/export.sh@5 -- # export PATH 00:24:45.861 20:42:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.861 20:42:03 -- nvmf/common.sh@46 -- # : 0 00:24:45.861 20:42:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:45.861 20:42:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:45.861 20:42:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:45.861 20:42:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:45.861 20:42:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:45.861 20:42:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:45.861 20:42:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:45.861 20:42:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:45.861 20:42:03 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:45.861 20:42:03 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:45.861 20:42:03 -- target/bdevio.sh@14 -- # nvmftestinit 00:24:45.861 20:42:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:45.861 20:42:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.861 20:42:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:45.861 20:42:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:45.861 20:42:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:45.861 20:42:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.861 20:42:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:45.861 20:42:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.861 20:42:03 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:24:45.861 20:42:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:45.861 20:42:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:45.861 20:42:03 -- common/autotest_common.sh@10 -- # set +x 00:24:51.200 20:42:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:51.200 20:42:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:51.200 20:42:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:51.200 20:42:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:51.200 20:42:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:51.200 20:42:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:51.200 20:42:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:51.200 20:42:08 -- nvmf/common.sh@294 -- # net_devs=() 00:24:51.200 20:42:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:51.200 20:42:08 -- 
nvmf/common.sh@295 -- # e810=() 00:24:51.200 20:42:08 -- nvmf/common.sh@295 -- # local -ga e810 00:24:51.200 20:42:08 -- nvmf/common.sh@296 -- # x722=() 00:24:51.200 20:42:08 -- nvmf/common.sh@296 -- # local -ga x722 00:24:51.200 20:42:08 -- nvmf/common.sh@297 -- # mlx=() 00:24:51.200 20:42:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:51.200 20:42:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:51.200 20:42:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:51.200 20:42:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:51.200 20:42:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:51.200 20:42:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:51.200 20:42:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:51.200 20:42:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:51.200 20:42:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:51.200 20:42:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:51.200 20:42:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:51.200 20:42:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:51.200 20:42:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:51.200 20:42:08 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:51.200 20:42:08 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:24:51.200 20:42:08 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:24:51.200 20:42:08 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:24:51.200 20:42:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:51.200 20:42:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:51.200 20:42:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:24:51.200 Found 0000:27:00.0 (0x8086 - 0x159b) 00:24:51.200 20:42:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:51.200 20:42:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:51.200 20:42:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.200 20:42:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.200 20:42:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:51.200 20:42:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:51.200 20:42:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:24:51.200 Found 0000:27:00.1 (0x8086 - 0x159b) 00:24:51.200 20:42:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:51.200 20:42:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:51.200 20:42:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.200 20:42:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.200 20:42:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:51.200 20:42:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:51.200 20:42:08 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:24:51.200 20:42:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:51.200 20:42:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.200 20:42:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:51.200 20:42:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.200 20:42:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:24:51.200 Found net devices under 0000:27:00.0: cvl_0_0 00:24:51.200 
20:42:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.200 20:42:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:51.201 20:42:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.201 20:42:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:51.201 20:42:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.201 20:42:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:24:51.201 Found net devices under 0000:27:00.1: cvl_0_1 00:24:51.201 20:42:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.201 20:42:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:51.201 20:42:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:51.201 20:42:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:51.201 20:42:08 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:51.201 20:42:08 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:51.201 20:42:08 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:51.201 20:42:08 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:51.201 20:42:08 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:51.201 20:42:08 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:51.201 20:42:08 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:51.201 20:42:08 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:51.201 20:42:08 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:51.201 20:42:08 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:51.201 20:42:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:51.201 20:42:08 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:51.201 20:42:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:51.201 20:42:08 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:51.201 20:42:08 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:51.201 20:42:08 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:51.201 20:42:08 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:51.201 20:42:08 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:51.201 20:42:08 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:51.201 20:42:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:51.201 20:42:08 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:51.201 20:42:08 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:51.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:51.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms 00:24:51.201 00:24:51.201 --- 10.0.0.2 ping statistics --- 00:24:51.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.201 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:24:51.201 20:42:08 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:51.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:51.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.424 ms 00:24:51.201 00:24:51.201 --- 10.0.0.1 ping statistics --- 00:24:51.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.201 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:24:51.201 20:42:08 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:51.201 20:42:08 -- nvmf/common.sh@410 -- # return 0 00:24:51.201 20:42:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:51.201 20:42:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:51.201 20:42:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:51.201 20:42:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:51.201 20:42:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:51.201 20:42:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:51.201 20:42:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:51.201 20:42:08 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:24:51.201 20:42:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:51.201 20:42:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:51.201 20:42:08 -- common/autotest_common.sh@10 -- # set +x 00:24:51.201 20:42:08 -- nvmf/common.sh@469 -- # nvmfpid=3621036 00:24:51.201 20:42:08 -- nvmf/common.sh@470 -- # waitforlisten 3621036 00:24:51.201 20:42:08 -- common/autotest_common.sh@819 -- # '[' -z 3621036 ']' 00:24:51.201 20:42:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:51.201 20:42:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:51.201 20:42:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:51.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:51.201 20:42:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:51.201 20:42:08 -- common/autotest_common.sh@10 -- # set +x 00:24:51.201 20:42:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:24:51.201 [2024-04-26 20:42:08.926560] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:51.201 [2024-04-26 20:42:08.926683] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:24:51.201 [2024-04-26 20:42:09.071899] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:51.201 [2024-04-26 20:42:09.192472] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:51.201 [2024-04-26 20:42:09.192668] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:51.201 [2024-04-26 20:42:09.192682] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:51.201 [2024-04-26 20:42:09.192692] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
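The launch line traced above is the crux of this no-huge variant: the target runs without hugepage-backed memory. Broken out for readability (flag meanings per the SPDK app framework; core mask 0x78 = 0b1111000 matches the reactor notices on cores 3-6 that follow):

    # -i 0        shared-memory instance id (NVMF_APP_SHM_ID)
    # -e 0xFFFF   enable all tracepoint groups, as the notices above confirm
    # --no-huge   back DPDK memory with anonymous pages instead of hugepages
    # -s 1024     cap that memory pool at 1024 MB
    # -m 0x78     run reactors on cores 3-6
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78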
00:24:51.201 [2024-04-26 20:42:09.192923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:51.201 [2024-04-26 20:42:09.193052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:24:51.201 [2024-04-26 20:42:09.193154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:51.201 [2024-04-26 20:42:09.193183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:24:51.460 20:42:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:51.460 20:42:09 -- common/autotest_common.sh@852 -- # return 0 00:24:51.460 20:42:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:51.460 20:42:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:51.460 20:42:09 -- common/autotest_common.sh@10 -- # set +x 00:24:51.460 20:42:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:51.460 20:42:09 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:51.460 20:42:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:51.460 20:42:09 -- common/autotest_common.sh@10 -- # set +x 00:24:51.460 [2024-04-26 20:42:09.651980] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:51.460 20:42:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:51.460 20:42:09 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:51.460 20:42:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:51.460 20:42:09 -- common/autotest_common.sh@10 -- # set +x 00:24:51.460 Malloc0 00:24:51.460 20:42:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:51.460 20:42:09 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:51.460 20:42:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:51.460 20:42:09 -- common/autotest_common.sh@10 -- # set +x 00:24:51.460 20:42:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:51.460 20:42:09 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:51.460 20:42:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:51.460 20:42:09 -- common/autotest_common.sh@10 -- # set +x 00:24:51.460 20:42:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:51.460 20:42:09 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:51.460 20:42:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:51.460 20:42:09 -- common/autotest_common.sh@10 -- # set +x 00:24:51.460 [2024-04-26 20:42:09.713566] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:51.460 20:42:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:51.460 20:42:09 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:24:51.460 20:42:09 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:24:51.460 20:42:09 -- nvmf/common.sh@520 -- # config=() 00:24:51.460 20:42:09 -- nvmf/common.sh@520 -- # local subsystem config 00:24:51.460 20:42:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:51.460 20:42:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:51.460 { 00:24:51.460 "params": { 00:24:51.460 "name": "Nvme$subsystem", 00:24:51.460 "trtype": "$TEST_TRANSPORT", 00:24:51.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:51.460 "adrfam": "ipv4", 00:24:51.460 "trsvcid": 
"$NVMF_PORT", 00:24:51.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:51.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:51.460 "hdgst": ${hdgst:-false}, 00:24:51.460 "ddgst": ${ddgst:-false} 00:24:51.460 }, 00:24:51.460 "method": "bdev_nvme_attach_controller" 00:24:51.460 } 00:24:51.460 EOF 00:24:51.460 )") 00:24:51.460 20:42:09 -- nvmf/common.sh@542 -- # cat 00:24:51.460 20:42:09 -- nvmf/common.sh@544 -- # jq . 00:24:51.460 20:42:09 -- nvmf/common.sh@545 -- # IFS=, 00:24:51.460 20:42:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:51.461 "params": { 00:24:51.461 "name": "Nvme1", 00:24:51.461 "trtype": "tcp", 00:24:51.461 "traddr": "10.0.0.2", 00:24:51.461 "adrfam": "ipv4", 00:24:51.461 "trsvcid": "4420", 00:24:51.461 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:51.461 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:51.461 "hdgst": false, 00:24:51.461 "ddgst": false 00:24:51.461 }, 00:24:51.461 "method": "bdev_nvme_attach_controller" 00:24:51.461 }' 00:24:51.461 [2024-04-26 20:42:09.783005] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:51.461 [2024-04-26 20:42:09.783111] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3621161 ] 00:24:51.720 [2024-04-26 20:42:09.915562] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:51.720 [2024-04-26 20:42:10.040586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.720 [2024-04-26 20:42:10.040694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.720 [2024-04-26 20:42:10.040699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:51.980 [2024-04-26 20:42:10.260717] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:24:51.980 [2024-04-26 20:42:10.260760] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:24:51.980 I/O targets: 00:24:51.980 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:24:51.980 00:24:51.980 00:24:51.980 CUnit - A unit testing framework for C - Version 2.1-3 00:24:51.980 http://cunit.sourceforge.net/ 00:24:51.980 00:24:51.980 00:24:51.980 Suite: bdevio tests on: Nvme1n1 00:24:51.980 Test: blockdev write read block ...passed 00:24:52.239 Test: blockdev write zeroes read block ...passed 00:24:52.239 Test: blockdev write zeroes read no split ...passed 00:24:52.239 Test: blockdev write zeroes read split ...passed 00:24:52.239 Test: blockdev write zeroes read split partial ...passed 00:24:52.239 Test: blockdev reset ...[2024-04-26 20:42:10.480618] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.239 [2024-04-26 20:42:10.480736] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000002f80 (9): Bad file descriptor 00:24:52.239 [2024-04-26 20:42:10.500907] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:52.239 passed 00:24:52.239 Test: blockdev write read 8 blocks ...passed 00:24:52.239 Test: blockdev write read size > 128k ...passed 00:24:52.239 Test: blockdev write read invalid size ...passed 00:24:52.239 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:52.239 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:52.239 Test: blockdev write read max offset ...passed 00:24:52.499 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:52.499 Test: blockdev writev readv 8 blocks ...passed 00:24:52.499 Test: blockdev writev readv 30 x 1block ...passed 00:24:52.499 Test: blockdev writev readv block ...passed 00:24:52.499 Test: blockdev writev readv size > 128k ...passed 00:24:52.499 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:52.499 Test: blockdev comparev and writev ...[2024-04-26 20:42:10.680401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:52.499 [2024-04-26 20:42:10.680445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:52.499 [2024-04-26 20:42:10.680463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:52.499 [2024-04-26 20:42:10.680473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:52.499 [2024-04-26 20:42:10.680837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:52.499 [2024-04-26 20:42:10.680849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:52.499 [2024-04-26 20:42:10.680865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:52.499 [2024-04-26 20:42:10.680874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:52.499 [2024-04-26 20:42:10.681239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:52.499 [2024-04-26 20:42:10.681250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:52.499 [2024-04-26 20:42:10.681264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:52.499 [2024-04-26 20:42:10.681273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:52.499 [2024-04-26 20:42:10.681622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:52.499 [2024-04-26 20:42:10.681635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:52.499 [2024-04-26 20:42:10.681649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:52.499 [2024-04-26 20:42:10.681657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:52.499 passed 00:24:52.499 Test: blockdev nvme passthru rw ...passed 00:24:52.499 Test: blockdev nvme passthru vendor specific ...[2024-04-26 20:42:10.765976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:52.499 [2024-04-26 20:42:10.766001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:52.499 [2024-04-26 20:42:10.766242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:52.499 [2024-04-26 20:42:10.766257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:52.499 [2024-04-26 20:42:10.766484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:52.499 [2024-04-26 20:42:10.766494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:52.499 [2024-04-26 20:42:10.766730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:52.499 [2024-04-26 20:42:10.766741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:52.499 passed 00:24:52.499 Test: blockdev nvme admin passthru ...passed 00:24:52.499 Test: blockdev copy ...passed 00:24:52.499 00:24:52.499 Run Summary: Type Total Ran Passed Failed Inactive 00:24:52.499 suites 1 1 n/a 0 0 00:24:52.499 tests 23 23 23 0 0 00:24:52.499 asserts 152 152 152 0 n/a 00:24:52.499 00:24:52.499 Elapsed time = 1.131 seconds 00:24:53.069 20:42:11 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:53.069 20:42:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:53.069 20:42:11 -- common/autotest_common.sh@10 -- # set +x 00:24:53.069 20:42:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:53.069 20:42:11 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:24:53.069 20:42:11 -- target/bdevio.sh@30 -- # nvmftestfini 00:24:53.069 20:42:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:53.069 20:42:11 -- nvmf/common.sh@116 -- # sync 00:24:53.069 20:42:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:53.069 20:42:11 -- nvmf/common.sh@119 -- # set +e 00:24:53.069 20:42:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:53.069 20:42:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:53.069 rmmod nvme_tcp 00:24:53.069 rmmod nvme_fabrics 00:24:53.069 rmmod nvme_keyring 00:24:53.069 20:42:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:53.069 20:42:11 -- nvmf/common.sh@123 -- # set -e 00:24:53.069 20:42:11 -- nvmf/common.sh@124 -- # return 0 00:24:53.069 20:42:11 -- nvmf/common.sh@477 -- # '[' -n 3621036 ']' 00:24:53.069 20:42:11 -- nvmf/common.sh@478 -- # killprocess 3621036 00:24:53.069 20:42:11 -- common/autotest_common.sh@926 -- # '[' -z 3621036 ']' 00:24:53.069 20:42:11 -- common/autotest_common.sh@930 -- # kill -0 3621036 00:24:53.069 20:42:11 -- common/autotest_common.sh@931 -- # uname 00:24:53.069 20:42:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:53.069 20:42:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3621036 00:24:53.069 20:42:11 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:24:53.069 20:42:11 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:24:53.069 20:42:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3621036' 00:24:53.069 killing process with pid 3621036 00:24:53.069 20:42:11 -- common/autotest_common.sh@945 -- # kill 3621036 00:24:53.069 20:42:11 -- common/autotest_common.sh@950 -- # wait 3621036 00:24:53.636 20:42:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:53.636 20:42:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:53.636 20:42:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:53.636 20:42:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:53.636 20:42:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:53.636 20:42:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.636 20:42:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:53.636 20:42:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.544 20:42:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:55.544 00:24:55.544 real 0m10.104s 00:24:55.544 user 0m13.548s 00:24:55.544 sys 0m4.706s 00:24:55.544 20:42:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:55.544 20:42:13 -- common/autotest_common.sh@10 -- # set +x 00:24:55.544 ************************************ 00:24:55.544 END TEST nvmf_bdevio_no_huge 00:24:55.544 ************************************ 00:24:55.544 20:42:13 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:55.544 20:42:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:55.544 20:42:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:55.544 20:42:13 -- common/autotest_common.sh@10 -- # set +x 00:24:55.544 ************************************ 00:24:55.544 START TEST nvmf_tls 00:24:55.544 ************************************ 00:24:55.544 20:42:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:55.544 * Looking for test storage... 
00:24:55.544 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:24:55.544 20:42:13 -- target/tls.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:24:55.544 20:42:13 -- nvmf/common.sh@7 -- # uname -s 00:24:55.544 20:42:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:55.544 20:42:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:55.544 20:42:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:55.544 20:42:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:55.544 20:42:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:55.544 20:42:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:55.544 20:42:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:55.544 20:42:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:55.544 20:42:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:55.544 20:42:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:55.544 20:42:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:24:55.544 20:42:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:24:55.544 20:42:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:55.544 20:42:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:55.544 20:42:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:55.544 20:42:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:24:55.544 20:42:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:55.544 20:42:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:55.544 20:42:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:55.544 20:42:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.544 20:42:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.544 20:42:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.544 20:42:13 -- paths/export.sh@5 -- # export PATH 00:24:55.545 20:42:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.545 20:42:13 -- nvmf/common.sh@46 -- # : 0 00:24:55.545 20:42:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:55.545 20:42:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:55.545 20:42:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:55.545 20:42:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:55.545 20:42:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:55.545 20:42:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:55.545 20:42:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:55.545 20:42:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:55.545 20:42:13 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:24:55.545 20:42:13 -- target/tls.sh@71 -- # nvmftestinit 00:24:55.545 20:42:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:55.545 20:42:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:55.545 20:42:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:55.545 20:42:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:55.545 20:42:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:55.545 20:42:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.545 20:42:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:55.545 20:42:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.545 20:42:13 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:24:55.545 20:42:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:55.545 20:42:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:55.545 20:42:13 -- common/autotest_common.sh@10 -- # set +x 00:25:02.124 20:42:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:02.124 20:42:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:02.124 20:42:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:02.124 20:42:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:02.124 20:42:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:02.124 20:42:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:02.124 20:42:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:02.124 20:42:19 -- nvmf/common.sh@294 -- # net_devs=() 00:25:02.124 20:42:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:02.124 20:42:19 -- nvmf/common.sh@295 -- # e810=() 
00:25:02.124 20:42:19 -- nvmf/common.sh@295 -- # local -ga e810 00:25:02.124 20:42:19 -- nvmf/common.sh@296 -- # x722=() 00:25:02.124 20:42:19 -- nvmf/common.sh@296 -- # local -ga x722 00:25:02.124 20:42:19 -- nvmf/common.sh@297 -- # mlx=() 00:25:02.124 20:42:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:02.124 20:42:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:02.124 20:42:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:02.124 20:42:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:02.124 20:42:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:02.124 20:42:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:02.124 20:42:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:02.124 20:42:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:02.124 20:42:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:02.124 20:42:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:02.124 20:42:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:02.124 20:42:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:02.124 20:42:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:02.124 20:42:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:02.124 20:42:19 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:25:02.124 20:42:19 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:25:02.124 20:42:19 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:25:02.124 20:42:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:02.124 20:42:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:02.124 20:42:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:25:02.124 Found 0000:27:00.0 (0x8086 - 0x159b) 00:25:02.124 20:42:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:02.124 20:42:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:02.124 20:42:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:02.124 20:42:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:02.124 20:42:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:02.124 20:42:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:02.124 20:42:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:25:02.124 Found 0000:27:00.1 (0x8086 - 0x159b) 00:25:02.124 20:42:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:02.124 20:42:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:02.124 20:42:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:02.124 20:42:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:02.124 20:42:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:02.124 20:42:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:02.124 20:42:19 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:25:02.124 20:42:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:02.124 20:42:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:02.124 20:42:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:02.124 20:42:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:02.124 20:42:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:25:02.124 Found net devices under 0000:27:00.0: cvl_0_0 00:25:02.124 20:42:19 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:25:02.124 20:42:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:02.124 20:42:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:02.124 20:42:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:02.124 20:42:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:02.124 20:42:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:25:02.124 Found net devices under 0000:27:00.1: cvl_0_1 00:25:02.124 20:42:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:02.124 20:42:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:02.124 20:42:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:02.124 20:42:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:02.124 20:42:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:02.124 20:42:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:02.124 20:42:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:02.124 20:42:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:02.124 20:42:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:02.124 20:42:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:02.124 20:42:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:02.124 20:42:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:02.124 20:42:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:02.124 20:42:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:02.124 20:42:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:02.124 20:42:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:02.124 20:42:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:02.124 20:42:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:02.124 20:42:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:02.124 20:42:19 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:02.124 20:42:19 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:02.124 20:42:19 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:02.124 20:42:19 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:02.124 20:42:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:02.124 20:42:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:02.124 20:42:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:02.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:02.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:25:02.124 00:25:02.124 --- 10.0.0.2 ping statistics --- 00:25:02.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.124 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:25:02.125 20:42:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:02.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:02.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:25:02.125 00:25:02.125 --- 10.0.0.1 ping statistics --- 00:25:02.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.125 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:25:02.125 20:42:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:02.125 20:42:19 -- nvmf/common.sh@410 -- # return 0 00:25:02.125 20:42:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:02.125 20:42:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:02.125 20:42:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:02.125 20:42:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:02.125 20:42:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:02.125 20:42:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:02.125 20:42:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:02.125 20:42:19 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:25:02.125 20:42:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:02.125 20:42:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:02.125 20:42:19 -- common/autotest_common.sh@10 -- # set +x 00:25:02.125 20:42:19 -- nvmf/common.sh@469 -- # nvmfpid=3625520 00:25:02.125 20:42:19 -- nvmf/common.sh@470 -- # waitforlisten 3625520 00:25:02.125 20:42:19 -- common/autotest_common.sh@819 -- # '[' -z 3625520 ']' 00:25:02.125 20:42:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.125 20:42:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:02.125 20:42:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:02.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:02.125 20:42:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:02.125 20:42:19 -- common/autotest_common.sh@10 -- # set +x 00:25:02.125 20:42:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:25:02.125 [2024-04-26 20:42:19.774001] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:02.125 [2024-04-26 20:42:19.774112] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.125 EAL: No free 2048 kB hugepages reported on node 1 00:25:02.125 [2024-04-26 20:42:19.899914] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.125 [2024-04-26 20:42:19.996281] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:02.125 [2024-04-26 20:42:19.996466] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:02.125 [2024-04-26 20:42:19.996481] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:02.125 [2024-04-26 20:42:19.996491] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
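The target is now up inside the cvl_0_0_ns_spdk namespace. Because it was launched with --wait-for-rpc, subsystem initialization is held off until the socket layer has been switched to the ssl implementation; that is what the next RPCs in this run do. Condensed from the commands logged around here (workspace prefix shortened):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
  ./scripts/rpc.py sock_set_default_impl -i ssl
  ./scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
  ./scripts/rpc.py framework_start_init    # releases --wait-for-rpc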
00:25:02.125 [2024-04-26 20:42:19.996526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.386 20:42:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:02.386 20:42:20 -- common/autotest_common.sh@852 -- # return 0 00:25:02.386 20:42:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:02.386 20:42:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:02.386 20:42:20 -- common/autotest_common.sh@10 -- # set +x 00:25:02.386 20:42:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:02.386 20:42:20 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:25:02.386 20:42:20 -- target/tls.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:25:02.386 true 00:25:02.386 20:42:20 -- target/tls.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:02.386 20:42:20 -- target/tls.sh@82 -- # jq -r .tls_version 00:25:02.646 20:42:20 -- target/tls.sh@82 -- # version=0 00:25:02.646 20:42:20 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:25:02.646 20:42:20 -- target/tls.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:25:02.646 20:42:20 -- target/tls.sh@90 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:02.646 20:42:20 -- target/tls.sh@90 -- # jq -r .tls_version 00:25:02.906 20:42:21 -- target/tls.sh@90 -- # version=13 00:25:02.906 20:42:21 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:25:02.906 20:42:21 -- target/tls.sh@97 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:25:02.906 20:42:21 -- target/tls.sh@98 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:02.906 20:42:21 -- target/tls.sh@98 -- # jq -r .tls_version 00:25:03.164 20:42:21 -- target/tls.sh@98 -- # version=7 00:25:03.164 20:42:21 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:25:03.164 20:42:21 -- target/tls.sh@105 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:03.164 20:42:21 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:25:03.164 20:42:21 -- target/tls.sh@105 -- # ktls=false 00:25:03.164 20:42:21 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:25:03.164 20:42:21 -- target/tls.sh@112 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:25:03.424 20:42:21 -- target/tls.sh@113 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:03.424 20:42:21 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:25:03.424 20:42:21 -- target/tls.sh@113 -- # ktls=true 00:25:03.424 20:42:21 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:25:03.424 20:42:21 -- target/tls.sh@120 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:25:03.684 20:42:21 -- target/tls.sh@121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:03.684 20:42:21 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:25:03.684 20:42:22 -- target/tls.sh@121 -- # ktls=false 00:25:03.684 20:42:22 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:25:03.684 20:42:22 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:25:03.684 20:42:22 -- target/tls.sh@49 -- # local 
key hash crc 00:25:03.684 20:42:22 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:25:03.684 20:42:22 -- target/tls.sh@51 -- # hash=01 00:25:03.684 20:42:22 -- target/tls.sh@52 -- # gzip -1 -c 00:25:03.684 20:42:22 -- target/tls.sh@52 -- # head -c 4 00:25:03.684 20:42:22 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:25:03.684 20:42:22 -- target/tls.sh@52 -- # tail -c8 00:25:03.684 20:42:22 -- target/tls.sh@52 -- # crc='p$H�' 00:25:03.944 20:42:22 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:25:03.944 20:42:22 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:25:03.944 20:42:22 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:03.944 20:42:22 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:03.944 20:42:22 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:25:03.944 20:42:22 -- target/tls.sh@49 -- # local key hash crc 00:25:03.944 20:42:22 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:25:03.944 20:42:22 -- target/tls.sh@51 -- # hash=01 00:25:03.944 20:42:22 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:25:03.944 20:42:22 -- target/tls.sh@52 -- # gzip -1 -c 00:25:03.944 20:42:22 -- target/tls.sh@52 -- # tail -c8 00:25:03.944 20:42:22 -- target/tls.sh@52 -- # head -c 4 00:25:03.944 20:42:22 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:25:03.944 20:42:22 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:25:03.944 20:42:22 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:25:03.944 20:42:22 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:25:03.944 20:42:22 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:25:03.944 20:42:22 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:25:03.944 20:42:22 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt 00:25:03.944 20:42:22 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:03.944 20:42:22 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:25:03.944 20:42:22 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:25:03.944 20:42:22 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt 00:25:03.944 20:42:22 -- target/tls.sh@139 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:25:03.944 20:42:22 -- target/tls.sh@140 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:25:04.271 20:42:22 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:25:04.271 20:42:22 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:25:04.271 20:42:22 -- target/tls.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:04.533 [2024-04-26 20:42:22.624301] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:04.533 20:42:22 -- target/tls.sh@61 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:04.533 20:42:22 -- target/tls.sh@62 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:04.793 [2024-04-26 20:42:22.920378] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:04.793 [2024-04-26 20:42:22.920676] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:04.793 20:42:22 -- target/tls.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:04.793 malloc0 00:25:04.793 20:42:23 -- target/tls.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:05.053 20:42:23 -- target/tls.sh@67 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:25:05.312 20:42:23 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:25:05.312 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.302 Initializing NVMe Controllers 00:25:15.302 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:15.302 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:15.302 Initialization complete. Launching workers. 
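The format_interchange_psk steps a few entries back assemble the NVMe/TCP PSK interchange form: the configured key followed by its CRC32, base64-encoded under an NVMeTLSkey-1:<hash>: prefix. The CRC32 is lifted from the gzip trailer, whose last eight bytes are CRC32 then ISIZE. A simplified equivalent of that pipeline (plain pipes in place of the /dev/fd/62 redirection tls.sh uses; it works here because this particular CRC contains no NUL or newline bytes):

  key=00112233445566778899aabbccddeeff
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)    # 4-byte CRC32 of the key
  echo "NVMeTLSkey-1:01:$(echo -n "$key$crc" | base64):"
  # -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: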
00:25:15.302 ========================================================
00:25:15.302 Latency(us)
00:25:15.302 Device Information : IOPS MiB/s Average min max
00:25:15.302 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17867.96 69.80 3582.09 1149.49 6894.65
00:25:15.302 ========================================================
00:25:15.302 Total : 17867.96 69.80 3582.09 1149.49 6894.65
00:25:15.302
00:25:15.302 20:42:33 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt
00:25:15.302 20:42:33 -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:25:15.302 20:42:33 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:25:15.303 20:42:33 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:25:15.303 20:42:33 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt'
00:25:15.303 20:42:33 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:25:15.303 20:42:33 -- target/tls.sh@28 -- # bdevperf_pid=3628155
00:25:15.303 20:42:33 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:25:15.303 20:42:33 -- target/tls.sh@31 -- # waitforlisten 3628155 /var/tmp/bdevperf.sock
00:25:15.303 20:42:33 -- common/autotest_common.sh@819 -- # '[' -z 3628155 ']'
00:25:15.303 20:42:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:15.303 20:42:33 -- common/autotest_common.sh@824 -- # local max_retries=100
00:25:15.303 20:42:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:15.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:15.303 20:42:33 -- common/autotest_common.sh@828 -- # xtrace_disable
00:25:15.303 20:42:33 -- common/autotest_common.sh@10 -- # set +x
00:25:15.303 20:42:33 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:25:15.560 [2024-04-26 20:42:33.650585] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:25:15.560 [2024-04-26 20:42:33.650704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3628155 ] 00:25:15.560 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.560 [2024-04-26 20:42:33.766459] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.560 [2024-04-26 20:42:33.854870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:16.130 20:42:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:16.130 20:42:34 -- common/autotest_common.sh@852 -- # return 0 00:25:16.130 20:42:34 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:25:16.391 [2024-04-26 20:42:34.485756] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:16.391 TLSTESTn1 00:25:16.391 20:42:34 -- target/tls.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:16.391 Running I/O for 10 seconds... 00:25:26.396 00:25:26.396 Latency(us) 00:25:26.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:26.397 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:26.397 Verification LBA range: start 0x0 length 0x2000 00:25:26.397 TLSTESTn1 : 10.01 4803.93 18.77 0.00 0.00 26619.06 4363.32 60707.03 00:25:26.397 =================================================================================================================== 00:25:26.397 Total : 4803.93 18.77 0.00 0.00 26619.06 4363.32 60707.03 00:25:26.397 0 00:25:26.397 20:42:44 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:26.397 20:42:44 -- target/tls.sh@45 -- # killprocess 3628155 00:25:26.397 20:42:44 -- common/autotest_common.sh@926 -- # '[' -z 3628155 ']' 00:25:26.397 20:42:44 -- common/autotest_common.sh@930 -- # kill -0 3628155 00:25:26.397 20:42:44 -- common/autotest_common.sh@931 -- # uname 00:25:26.397 20:42:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:26.397 20:42:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3628155 00:25:26.397 20:42:44 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:26.397 20:42:44 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:26.397 20:42:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3628155' 00:25:26.397 killing process with pid 3628155 00:25:26.397 20:42:44 -- common/autotest_common.sh@945 -- # kill 3628155 00:25:26.397 Received shutdown signal, test time was about 10.000000 seconds 00:25:26.397 00:25:26.397 Latency(us) 00:25:26.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:26.397 =================================================================================================================== 00:25:26.397 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:26.397 20:42:44 -- common/autotest_common.sh@950 -- # wait 3628155 00:25:26.965 20:42:45 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt 00:25:26.965 20:42:45 -- common/autotest_common.sh@640 -- # local es=0 00:25:26.965 20:42:45 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt 00:25:26.965 20:42:45 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:25:26.965 20:42:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:26.965 20:42:45 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:25:26.965 20:42:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:26.965 20:42:45 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt 00:25:26.965 20:42:45 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:26.965 20:42:45 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:26.965 20:42:45 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:26.965 20:42:45 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:25:26.965 20:42:45 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:26.965 20:42:45 -- target/tls.sh@28 -- # bdevperf_pid=3630542 00:25:26.965 20:42:45 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:26.965 20:42:45 -- target/tls.sh@31 -- # waitforlisten 3630542 /var/tmp/bdevperf.sock 00:25:26.965 20:42:45 -- common/autotest_common.sh@819 -- # '[' -z 3630542 ']' 00:25:26.965 20:42:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:26.965 20:42:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:26.965 20:42:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:26.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:26.965 20:42:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:26.965 20:42:45 -- common/autotest_common.sh@10 -- # set +x 00:25:26.965 20:42:45 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:26.965 [2024-04-26 20:42:45.187792] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:26.965 [2024-04-26 20:42:45.187918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3630542 ] 00:25:26.965 EAL: No free 2048 kB hugepages reported on node 1 00:25:26.965 [2024-04-26 20:42:45.299783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.224 [2024-04-26 20:42:45.388548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:27.792 20:42:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:27.792 20:42:45 -- common/autotest_common.sh@852 -- # return 0 00:25:27.792 20:42:45 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt 00:25:27.792 [2024-04-26 20:42:45.997912] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:27.792 [2024-04-26 20:42:46.008018] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:27.792 [2024-04-26 20:42:46.008524] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (107): Transport endpoint is not connected 00:25:27.792 [2024-04-26 20:42:46.009501] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (9): Bad file descriptor 00:25:27.792 [2024-04-26 20:42:46.010499] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:27.792 [2024-04-26 20:42:46.010515] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:27.792 [2024-04-26 20:42:46.010532] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
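This failure is the expected outcome: the target only registered key1.txt for host1, so an initiator dialing with key2.txt cannot complete the TLS handshake, and the attach collapses with errno 107 (ENOTCONN) before the controller initializes. The NOT wrapper at target/tls.sh@155 inverts the exit status so the suite passes only when the attach fails; a sketch of that inversion (assumed shape, the real autotest_common.sh helper also validates its argument, as the valid_exec_arg lines show):

  NOT() {
      # succeed only when the wrapped command fails
      if "$@"; then return 1; else return 0; fi
  }
  NOT run_bdevperf ... key2.txt    # green only if the wrong key is rejected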
00:25:27.792 request: 00:25:27.792 { 00:25:27.792 "name": "TLSTEST", 00:25:27.793 "trtype": "tcp", 00:25:27.793 "traddr": "10.0.0.2", 00:25:27.793 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:27.793 "adrfam": "ipv4", 00:25:27.793 "trsvcid": "4420", 00:25:27.793 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:27.793 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:25:27.793 "method": "bdev_nvme_attach_controller", 00:25:27.793 "req_id": 1 00:25:27.793 } 00:25:27.793 Got JSON-RPC error response 00:25:27.793 response: 00:25:27.793 { 00:25:27.793 "code": -32602, 00:25:27.793 "message": "Invalid parameters" 00:25:27.793 } 00:25:27.793 20:42:46 -- target/tls.sh@36 -- # killprocess 3630542 00:25:27.793 20:42:46 -- common/autotest_common.sh@926 -- # '[' -z 3630542 ']' 00:25:27.793 20:42:46 -- common/autotest_common.sh@930 -- # kill -0 3630542 00:25:27.793 20:42:46 -- common/autotest_common.sh@931 -- # uname 00:25:27.793 20:42:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:27.793 20:42:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3630542 00:25:27.793 20:42:46 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:27.793 20:42:46 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:27.793 20:42:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3630542' 00:25:27.793 killing process with pid 3630542 00:25:27.793 20:42:46 -- common/autotest_common.sh@945 -- # kill 3630542 00:25:27.793 Received shutdown signal, test time was about 10.000000 seconds 00:25:27.793 00:25:27.793 Latency(us) 00:25:27.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.793 =================================================================================================================== 00:25:27.793 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:27.793 20:42:46 -- common/autotest_common.sh@950 -- # wait 3630542 00:25:28.366 20:42:46 -- target/tls.sh@37 -- # return 1 00:25:28.366 20:42:46 -- common/autotest_common.sh@643 -- # es=1 00:25:28.366 20:42:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:28.366 20:42:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:28.366 20:42:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:28.366 20:42:46 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:25:28.366 20:42:46 -- common/autotest_common.sh@640 -- # local es=0 00:25:28.366 20:42:46 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:25:28.366 20:42:46 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:25:28.366 20:42:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:28.366 20:42:46 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:25:28.366 20:42:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:28.366 20:42:46 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:25:28.366 20:42:46 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:28.366 20:42:46 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:28.366 20:42:46 -- target/tls.sh@23 -- # 
hostnqn=nqn.2016-06.io.spdk:host2 00:25:28.366 20:42:46 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:25:28.366 20:42:46 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:28.366 20:42:46 -- target/tls.sh@28 -- # bdevperf_pid=3630767 00:25:28.366 20:42:46 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:28.366 20:42:46 -- target/tls.sh@31 -- # waitforlisten 3630767 /var/tmp/bdevperf.sock 00:25:28.366 20:42:46 -- common/autotest_common.sh@819 -- # '[' -z 3630767 ']' 00:25:28.366 20:42:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:28.366 20:42:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:28.366 20:42:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:28.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:28.366 20:42:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:28.366 20:42:46 -- common/autotest_common.sh@10 -- # set +x 00:25:28.366 20:42:46 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:28.366 [2024-04-26 20:42:46.519274] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:28.366 [2024-04-26 20:42:46.519425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3630767 ] 00:25:28.366 EAL: No free 2048 kB hugepages reported on node 1 00:25:28.366 [2024-04-26 20:42:46.649906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.625 [2024-04-26 20:42:46.744893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:29.193 20:42:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:29.193 20:42:47 -- common/autotest_common.sh@852 -- # return 0 00:25:29.193 20:42:47 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:25:29.193 [2024-04-26 20:42:47.354861] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:29.193 [2024-04-26 20:42:47.367929] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:29.193 [2024-04-26 20:42:47.367956] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:29.193 [2024-04-26 20:42:47.367992] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:29.193 [2024-04-26 20:42:47.368618] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (107): Transport endpoint is not connected 00:25:29.193 [2024-04-26 20:42:47.369596] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x613000003300 (9): Bad file descriptor 00:25:29.193 [2024-04-26 20:42:47.370590] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.193 [2024-04-26 20:42:47.370607] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:29.193 [2024-04-26 20:42:47.370620] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.193 request: 00:25:29.193 { 00:25:29.193 "name": "TLSTEST", 00:25:29.193 "trtype": "tcp", 00:25:29.193 "traddr": "10.0.0.2", 00:25:29.193 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:29.193 "adrfam": "ipv4", 00:25:29.193 "trsvcid": "4420", 00:25:29.193 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:29.193 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:25:29.193 "method": "bdev_nvme_attach_controller", 00:25:29.193 "req_id": 1 00:25:29.193 } 00:25:29.193 Got JSON-RPC error response 00:25:29.193 response: 00:25:29.193 { 00:25:29.193 "code": -32602, 00:25:29.193 "message": "Invalid parameters" 00:25:29.193 } 00:25:29.193 20:42:47 -- target/tls.sh@36 -- # killprocess 3630767 00:25:29.193 20:42:47 -- common/autotest_common.sh@926 -- # '[' -z 3630767 ']' 00:25:29.193 20:42:47 -- common/autotest_common.sh@930 -- # kill -0 3630767 00:25:29.193 20:42:47 -- common/autotest_common.sh@931 -- # uname 00:25:29.193 20:42:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:29.193 20:42:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3630767 00:25:29.193 20:42:47 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:29.193 20:42:47 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:29.193 20:42:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3630767' 00:25:29.193 killing process with pid 3630767 00:25:29.193 20:42:47 -- common/autotest_common.sh@945 -- # kill 3630767 00:25:29.193 Received shutdown signal, test time was about 10.000000 seconds 00:25:29.193 00:25:29.193 Latency(us) 00:25:29.193 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:29.193 =================================================================================================================== 00:25:29.193 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:29.193 20:42:47 -- common/autotest_common.sh@950 -- # wait 3630767 00:25:29.451 20:42:47 -- target/tls.sh@37 -- # return 1 00:25:29.451 20:42:47 -- common/autotest_common.sh@643 -- # es=1 00:25:29.451 20:42:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:29.451 20:42:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:29.451 20:42:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:29.451 20:42:47 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:25:29.451 20:42:47 -- common/autotest_common.sh@640 -- # local es=0 00:25:29.451 20:42:47 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:25:29.451 20:42:47 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:25:29.451 20:42:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:29.451 20:42:47 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:25:29.451 20:42:47 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:29.451 20:42:47 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:25:29.451 20:42:47 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:29.451 20:42:47 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:25:29.451 20:42:47 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:29.451 20:42:47 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:25:29.451 20:42:47 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:29.451 20:42:47 -- target/tls.sh@28 -- # bdevperf_pid=3630956 00:25:29.451 20:42:47 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:29.451 20:42:47 -- target/tls.sh@31 -- # waitforlisten 3630956 /var/tmp/bdevperf.sock 00:25:29.451 20:42:47 -- common/autotest_common.sh@819 -- # '[' -z 3630956 ']' 00:25:29.451 20:42:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:29.451 20:42:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:29.451 20:42:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:29.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:29.451 20:42:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:29.451 20:42:47 -- common/autotest_common.sh@10 -- # set +x 00:25:29.451 20:42:47 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:29.711 [2024-04-26 20:42:47.848579] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:29.711 [2024-04-26 20:42:47.848691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3630956 ] 00:25:29.711 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.711 [2024-04-26 20:42:47.962208] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.972 [2024-04-26 20:42:48.057783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:30.233 20:42:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:30.233 20:42:48 -- common/autotest_common.sh@852 -- # return 0 00:25:30.233 20:42:48 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:25:30.495 [2024-04-26 20:42:48.690745] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:30.495 [2024-04-26 20:42:48.698492] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:30.495 [2024-04-26 20:42:48.698520] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:30.495 [2024-04-26 20:42:48.698555] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:30.495 [2024-04-26 20:42:48.698879] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (107): Transport endpoint is not connected 00:25:30.495 [2024-04-26 20:42:48.699856] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (9): Bad file descriptor 00:25:30.495 [2024-04-26 20:42:48.700850] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:30.495 [2024-04-26 20:42:48.700868] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:30.495 [2024-04-26 20:42:48.700886] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
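The tcp.c error above shows what the target actually looks up: a PSK identity built from the host and subsystem pair ('NVMe0R01 <hostnqn> <subnqn>'). Only one pairing was registered in this run, so both this attempt (host1 against cnode2) and the previous one (host2 against cnode1) miss the table:

  # the only PSK registration made earlier in this run:
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk .../key1.txt
  # any other (hostnqn, subnqn) combination yields 'Could not find PSK for identity'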
00:25:30.495 request: 00:25:30.495 { 00:25:30.495 "name": "TLSTEST", 00:25:30.495 "trtype": "tcp", 00:25:30.495 "traddr": "10.0.0.2", 00:25:30.495 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:30.495 "adrfam": "ipv4", 00:25:30.495 "trsvcid": "4420", 00:25:30.495 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:30.495 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:25:30.495 "method": "bdev_nvme_attach_controller", 00:25:30.495 "req_id": 1 00:25:30.495 } 00:25:30.495 Got JSON-RPC error response 00:25:30.495 response: 00:25:30.495 { 00:25:30.495 "code": -32602, 00:25:30.495 "message": "Invalid parameters" 00:25:30.495 } 00:25:30.495 20:42:48 -- target/tls.sh@36 -- # killprocess 3630956 00:25:30.495 20:42:48 -- common/autotest_common.sh@926 -- # '[' -z 3630956 ']' 00:25:30.495 20:42:48 -- common/autotest_common.sh@930 -- # kill -0 3630956 00:25:30.495 20:42:48 -- common/autotest_common.sh@931 -- # uname 00:25:30.495 20:42:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:30.495 20:42:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3630956 00:25:30.495 20:42:48 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:30.495 20:42:48 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:30.495 20:42:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3630956' 00:25:30.495 killing process with pid 3630956 00:25:30.495 20:42:48 -- common/autotest_common.sh@945 -- # kill 3630956 00:25:30.495 Received shutdown signal, test time was about 10.000000 seconds 00:25:30.495 00:25:30.495 Latency(us) 00:25:30.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:30.495 =================================================================================================================== 00:25:30.495 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:30.495 20:42:48 -- common/autotest_common.sh@950 -- # wait 3630956 00:25:31.062 20:42:49 -- target/tls.sh@37 -- # return 1 00:25:31.062 20:42:49 -- common/autotest_common.sh@643 -- # es=1 00:25:31.062 20:42:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:31.062 20:42:49 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:31.062 20:42:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:31.062 20:42:49 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:31.062 20:42:49 -- common/autotest_common.sh@640 -- # local es=0 00:25:31.062 20:42:49 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:31.062 20:42:49 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:25:31.062 20:42:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:31.062 20:42:49 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:25:31.062 20:42:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:31.062 20:42:49 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:31.062 20:42:49 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:31.062 20:42:49 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:31.062 20:42:49 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:31.062 20:42:49 -- target/tls.sh@23 -- # psk= 00:25:31.062 20:42:49 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:31.062 20:42:49 -- target/tls.sh@28 -- # 
bdevperf_pid=3631190 00:25:31.062 20:42:49 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:31.062 20:42:49 -- target/tls.sh@31 -- # waitforlisten 3631190 /var/tmp/bdevperf.sock 00:25:31.062 20:42:49 -- common/autotest_common.sh@819 -- # '[' -z 3631190 ']' 00:25:31.062 20:42:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:31.062 20:42:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:31.062 20:42:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:31.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:31.062 20:42:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:31.062 20:42:49 -- common/autotest_common.sh@10 -- # set +x 00:25:31.062 20:42:49 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:31.062 [2024-04-26 20:42:49.189629] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:31.062 [2024-04-26 20:42:49.189746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3631190 ] 00:25:31.062 EAL: No free 2048 kB hugepages reported on node 1 00:25:31.062 [2024-04-26 20:42:49.305059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.062 [2024-04-26 20:42:49.394261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:31.630 20:42:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:31.630 20:42:49 -- common/autotest_common.sh@852 -- # return 0 00:25:31.630 20:42:49 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:31.890 [2024-04-26 20:42:50.008699] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:31.890 [2024-04-26 20:42:50.010693] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003140 (9): Bad file descriptor 00:25:31.890 [2024-04-26 20:42:50.011684] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:31.890 [2024-04-26 20:42:50.011704] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:31.890 [2024-04-26 20:42:50.011720] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
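Here no --psk was passed at all (psk= is empty at target/tls.sh@23), so bdevperf attempts a plaintext NVMe/TCP connect against a listener that was created with -k and only speaks TLS; the connection is torn down during setup and the qpair is left on a bad file descriptor. The failing invocation differs from the working one only by the missing key:

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # with '--psk .../key1.txt' appended, the same command succeeds (see TLSTESTn1 above)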
00:25:31.890 request: 00:25:31.890 { 00:25:31.890 "name": "TLSTEST", 00:25:31.890 "trtype": "tcp", 00:25:31.890 "traddr": "10.0.0.2", 00:25:31.890 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:31.890 "adrfam": "ipv4", 00:25:31.890 "trsvcid": "4420", 00:25:31.890 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:31.890 "method": "bdev_nvme_attach_controller", 00:25:31.890 "req_id": 1 00:25:31.890 } 00:25:31.890 Got JSON-RPC error response 00:25:31.890 response: 00:25:31.890 { 00:25:31.890 "code": -32602, 00:25:31.890 "message": "Invalid parameters" 00:25:31.890 } 00:25:31.890 20:42:50 -- target/tls.sh@36 -- # killprocess 3631190 00:25:31.890 20:42:50 -- common/autotest_common.sh@926 -- # '[' -z 3631190 ']' 00:25:31.890 20:42:50 -- common/autotest_common.sh@930 -- # kill -0 3631190 00:25:31.890 20:42:50 -- common/autotest_common.sh@931 -- # uname 00:25:31.890 20:42:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:31.890 20:42:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3631190 00:25:31.890 20:42:50 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:31.890 20:42:50 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:31.890 20:42:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3631190' 00:25:31.890 killing process with pid 3631190 00:25:31.890 20:42:50 -- common/autotest_common.sh@945 -- # kill 3631190 00:25:31.890 Received shutdown signal, test time was about 10.000000 seconds 00:25:31.890 00:25:31.890 Latency(us) 00:25:31.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:31.890 =================================================================================================================== 00:25:31.890 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:31.890 20:42:50 -- common/autotest_common.sh@950 -- # wait 3631190 00:25:32.154 20:42:50 -- target/tls.sh@37 -- # return 1 00:25:32.154 20:42:50 -- common/autotest_common.sh@643 -- # es=1 00:25:32.154 20:42:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:32.154 20:42:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:32.154 20:42:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:32.154 20:42:50 -- target/tls.sh@167 -- # killprocess 3625520 00:25:32.154 20:42:50 -- common/autotest_common.sh@926 -- # '[' -z 3625520 ']' 00:25:32.154 20:42:50 -- common/autotest_common.sh@930 -- # kill -0 3625520 00:25:32.154 20:42:50 -- common/autotest_common.sh@931 -- # uname 00:25:32.154 20:42:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:32.154 20:42:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3625520 00:25:32.154 20:42:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:32.154 20:42:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:32.154 20:42:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3625520' 00:25:32.154 killing process with pid 3625520 00:25:32.154 20:42:50 -- common/autotest_common.sh@945 -- # kill 3625520 00:25:32.154 20:42:50 -- common/autotest_common.sh@950 -- # wait 3625520 00:25:32.802 20:42:51 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:25:32.802 20:42:51 -- target/tls.sh@49 -- # local key hash crc 00:25:32.802 20:42:51 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:25:32.802 20:42:51 -- target/tls.sh@51 -- # hash=02 00:25:32.802 20:42:51 -- target/tls.sh@52 -- # echo 
-n 00112233445566778899aabbccddeeff0011223344556677 00:25:32.802 20:42:51 -- target/tls.sh@52 -- # tail -c8 00:25:32.802 20:42:51 -- target/tls.sh@52 -- # gzip -1 -c 00:25:32.802 20:42:51 -- target/tls.sh@52 -- # head -c 4 00:25:32.802 20:42:51 -- target/tls.sh@52 -- # crc='�e�'\''' 00:25:32.802 20:42:51 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:25:32.802 20:42:51 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:25:32.802 20:42:51 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:32.802 20:42:51 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:32.802 20:42:51 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:32.802 20:42:51 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:32.802 20:42:51 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:32.802 20:42:51 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:25:32.802 20:42:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:32.802 20:42:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:32.802 20:42:51 -- common/autotest_common.sh@10 -- # set +x 00:25:32.802 20:42:51 -- nvmf/common.sh@469 -- # nvmfpid=3631761 00:25:32.802 20:42:51 -- nvmf/common.sh@470 -- # waitforlisten 3631761 00:25:32.802 20:42:51 -- common/autotest_common.sh@819 -- # '[' -z 3631761 ']' 00:25:32.802 20:42:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.802 20:42:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:32.802 20:42:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:32.802 20:42:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:32.802 20:42:51 -- common/autotest_common.sh@10 -- # set +x 00:25:32.802 20:42:51 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:33.059 [2024-04-26 20:42:51.168782] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:33.059 [2024-04-26 20:42:51.168887] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:33.059 EAL: No free 2048 kB hugepages reported on node 1 00:25:33.059 [2024-04-26 20:42:51.287248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.059 [2024-04-26 20:42:51.382928] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:33.059 [2024-04-26 20:42:51.383113] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:33.059 [2024-04-26 20:42:51.383127] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:33.059 [2024-04-26 20:42:51.383137] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
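For this second pass the suite derives a longer key with the interchange hash field set to 02 (per the interchange format, 01 and 02 select the retained-PSK hash, SHA-256 and SHA-384 respectively), then restarts the target with it. The derivation is the same CRC32-and-base64 pipeline as before, reconstructed here with plain pipes:

  key=00112233445566778899aabbccddeeff0011223344556677
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
  echo "NVMeTLSkey-1:02:$(echo -n "$key$crc" | base64):"
  # -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: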
00:25:33.059 [2024-04-26 20:42:51.383168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:33.626 20:42:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:33.626 20:42:51 -- common/autotest_common.sh@852 -- # return 0 00:25:33.626 20:42:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:33.626 20:42:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:33.626 20:42:51 -- common/autotest_common.sh@10 -- # set +x 00:25:33.626 20:42:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:33.626 20:42:51 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:33.626 20:42:51 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:33.626 20:42:51 -- target/tls.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:33.887 [2024-04-26 20:42:52.030225] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.887 20:42:52 -- target/tls.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:33.887 20:42:52 -- target/tls.sh@62 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:34.147 [2024-04-26 20:42:52.330373] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:34.147 [2024-04-26 20:42:52.330674] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:34.147 20:42:52 -- target/tls.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:34.408 malloc0 00:25:34.408 20:42:52 -- target/tls.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:34.408 20:42:52 -- target/tls.sh@67 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:34.746 20:42:52 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:34.746 20:42:52 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:34.746 20:42:52 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:34.746 20:42:52 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:34.746 20:42:52 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:25:34.746 20:42:52 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:34.746 20:42:52 -- target/tls.sh@28 -- # bdevperf_pid=3632137 00:25:34.746 20:42:52 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:34.746 20:42:52 -- target/tls.sh@31 -- # waitforlisten 3632137 /var/tmp/bdevperf.sock 00:25:34.746 20:42:52 -- common/autotest_common.sh@819 -- # '[' -z 3632137 ']' 00:25:34.746 20:42:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:34.746 20:42:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:34.746 20:42:52 -- common/autotest_common.sh@826 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:34.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:34.746 20:42:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:34.746 20:42:52 -- common/autotest_common.sh@10 -- # set +x 00:25:34.746 20:42:52 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:34.746 [2024-04-26 20:42:52.918922] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:34.746 [2024-04-26 20:42:52.919070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3632137 ] 00:25:34.746 EAL: No free 2048 kB hugepages reported on node 1 00:25:34.746 [2024-04-26 20:42:53.048538] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.007 [2024-04-26 20:42:53.143804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:35.578 20:42:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:35.578 20:42:53 -- common/autotest_common.sh@852 -- # return 0 00:25:35.578 20:42:53 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:35.578 [2024-04-26 20:42:53.768315] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:35.578 TLSTESTn1 00:25:35.578 20:42:53 -- target/tls.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:35.836 Running I/O for 10 seconds... 
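For readers tracing the setup above (target/tls.sh@174, setup_nvmf_tgt, followed by run_bdevperf), the whole TLS bring-up reduces to the RPC sequence below. This is a condensed sketch, not new output: rpc.py stands in for the full /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py path and key_long.txt for the full PSK file path shown in the trace.

  # target side: TCP transport, subsystem, and a TLS-capable (-k) listener
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  # back the subsystem with a malloc bdev and register the host along with its PSK
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key_long.txt
  # initiator side: attach a TLS controller through the bdevperf RPC socket, then run I/O
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key_long.txt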
00:25:45.822 00:25:45.822 Latency(us) 00:25:45.822 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:45.822 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:45.822 Verification LBA range: start 0x0 length 0x2000 00:25:45.822 TLSTESTn1 : 10.01 4852.85 18.96 0.00 0.00 26351.02 3949.41 53532.56 00:25:45.822 =================================================================================================================== 00:25:45.822 Total : 4852.85 18.96 0.00 0.00 26351.02 3949.41 53532.56 00:25:45.822 0 00:25:45.822 20:43:03 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:45.822 20:43:03 -- target/tls.sh@45 -- # killprocess 3632137 00:25:45.822 20:43:03 -- common/autotest_common.sh@926 -- # '[' -z 3632137 ']' 00:25:45.822 20:43:03 -- common/autotest_common.sh@930 -- # kill -0 3632137 00:25:45.822 20:43:03 -- common/autotest_common.sh@931 -- # uname 00:25:45.822 20:43:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:45.822 20:43:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3632137 00:25:45.822 20:43:04 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:45.822 20:43:04 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:45.822 20:43:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3632137' 00:25:45.822 killing process with pid 3632137 00:25:45.822 20:43:04 -- common/autotest_common.sh@945 -- # kill 3632137 00:25:45.822 Received shutdown signal, test time was about 10.000000 seconds 00:25:45.822 00:25:45.822 Latency(us) 00:25:45.822 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:45.822 =================================================================================================================== 00:25:45.822 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:45.822 20:43:04 -- common/autotest_common.sh@950 -- # wait 3632137 00:25:46.083 20:43:04 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:46.083 20:43:04 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:46.083 20:43:04 -- common/autotest_common.sh@640 -- # local es=0 00:25:46.083 20:43:04 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:46.084 20:43:04 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:25:46.084 20:43:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:46.084 20:43:04 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:25:46.084 20:43:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:46.084 20:43:04 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:46.084 20:43:04 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:46.084 20:43:04 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:46.084 20:43:04 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:46.084 20:43:04 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:25:46.084 20:43:04 -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:46.084 20:43:04 -- target/tls.sh@28 -- # bdevperf_pid=3634263 00:25:46.084 20:43:04 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:46.084 20:43:04 -- target/tls.sh@31 -- # waitforlisten 3634263 /var/tmp/bdevperf.sock 00:25:46.084 20:43:04 -- common/autotest_common.sh@819 -- # '[' -z 3634263 ']' 00:25:46.084 20:43:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:46.084 20:43:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:46.084 20:43:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:46.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:46.084 20:43:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:46.084 20:43:04 -- common/autotest_common.sh@10 -- # set +x 00:25:46.084 20:43:04 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:46.344 [2024-04-26 20:43:04.489050] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:46.344 [2024-04-26 20:43:04.489205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3634263 ] 00:25:46.344 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.344 [2024-04-26 20:43:04.618708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.604 [2024-04-26 20:43:04.714500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:46.862 20:43:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:46.862 20:43:05 -- common/autotest_common.sh@852 -- # return 0 00:25:46.862 20:43:05 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:47.140 [2024-04-26 20:43:05.318465] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:47.140 [2024-04-26 20:43:05.318515] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:25:47.140 request: 00:25:47.140 { 00:25:47.140 "name": "TLSTEST", 00:25:47.140 "trtype": "tcp", 00:25:47.140 "traddr": "10.0.0.2", 00:25:47.140 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:47.140 "adrfam": "ipv4", 00:25:47.140 "trsvcid": "4420", 00:25:47.140 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:47.140 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:25:47.140 "method": "bdev_nvme_attach_controller", 00:25:47.140 "req_id": 1 00:25:47.140 } 00:25:47.140 Got JSON-RPC error response 00:25:47.140 response: 00:25:47.140 { 00:25:47.140 "code": -22, 00:25:47.140 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:25:47.140 } 00:25:47.140 20:43:05 -- target/tls.sh@36 -- # killprocess 3634263 00:25:47.140 20:43:05 -- common/autotest_common.sh@926 -- # '[' -z 3634263 ']' 00:25:47.140 20:43:05 -- common/autotest_common.sh@930 -- # kill -0 3634263 00:25:47.140 20:43:05 
-- common/autotest_common.sh@931 -- # uname 00:25:47.140 20:43:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:47.140 20:43:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3634263 00:25:47.140 20:43:05 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:47.140 20:43:05 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:47.140 20:43:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3634263' 00:25:47.140 killing process with pid 3634263 00:25:47.140 20:43:05 -- common/autotest_common.sh@945 -- # kill 3634263 00:25:47.140 Received shutdown signal, test time was about 10.000000 seconds 00:25:47.140 00:25:47.140 Latency(us) 00:25:47.140 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:47.140 =================================================================================================================== 00:25:47.140 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:47.140 20:43:05 -- common/autotest_common.sh@950 -- # wait 3634263 00:25:47.397 20:43:05 -- target/tls.sh@37 -- # return 1 00:25:47.397 20:43:05 -- common/autotest_common.sh@643 -- # es=1 00:25:47.397 20:43:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:47.397 20:43:05 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:47.397 20:43:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:47.397 20:43:05 -- target/tls.sh@183 -- # killprocess 3631761 00:25:47.397 20:43:05 -- common/autotest_common.sh@926 -- # '[' -z 3631761 ']' 00:25:47.397 20:43:05 -- common/autotest_common.sh@930 -- # kill -0 3631761 00:25:47.397 20:43:05 -- common/autotest_common.sh@931 -- # uname 00:25:47.397 20:43:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:47.397 20:43:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3631761 00:25:47.653 20:43:05 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:47.654 20:43:05 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:47.654 20:43:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3631761' 00:25:47.654 killing process with pid 3631761 00:25:47.654 20:43:05 -- common/autotest_common.sh@945 -- # kill 3631761 00:25:47.654 20:43:05 -- common/autotest_common.sh@950 -- # wait 3631761 00:25:48.221 20:43:06 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:25:48.221 20:43:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:48.221 20:43:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:48.221 20:43:06 -- common/autotest_common.sh@10 -- # set +x 00:25:48.221 20:43:06 -- nvmf/common.sh@469 -- # nvmfpid=3634583 00:25:48.221 20:43:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:48.221 20:43:06 -- nvmf/common.sh@470 -- # waitforlisten 3634583 00:25:48.221 20:43:06 -- common/autotest_common.sh@819 -- # '[' -z 3634583 ']' 00:25:48.221 20:43:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.221 20:43:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:48.221 20:43:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:48.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
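The -22 error above is the expected outcome of this test case: target/tls.sh@179 deliberately ran chmod 0666 on the PSK file, and tcp_load_psk (bdev_nvme_rpc.c:336) refuses a key file that is readable by anyone other than its owner. A minimal reproduction of the check, using the same paths as this job:

  chmod 0666 key_long.txt
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller ... --psk key_long.txt
  # fails with JSON-RPC code -22: "Could not retrieve PSK from file: ..."
  chmod 0600 key_long.txt    # target/tls.sh@190 restores this before the positive runs that follow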
00:25:48.221 20:43:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:48.221 20:43:06 -- common/autotest_common.sh@10 -- # set +x 00:25:48.221 [2024-04-26 20:43:06.360596] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:48.221 [2024-04-26 20:43:06.360729] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:48.221 EAL: No free 2048 kB hugepages reported on node 1 00:25:48.221 [2024-04-26 20:43:06.510695] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.479 [2024-04-26 20:43:06.609533] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:48.479 [2024-04-26 20:43:06.609749] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:48.479 [2024-04-26 20:43:06.609764] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:48.479 [2024-04-26 20:43:06.609775] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:48.479 [2024-04-26 20:43:06.609814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:48.739 20:43:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:48.739 20:43:07 -- common/autotest_common.sh@852 -- # return 0 00:25:48.739 20:43:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:48.739 20:43:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:48.739 20:43:07 -- common/autotest_common.sh@10 -- # set +x 00:25:48.998 20:43:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:48.998 20:43:07 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:48.998 20:43:07 -- common/autotest_common.sh@640 -- # local es=0 00:25:48.998 20:43:07 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:48.998 20:43:07 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:25:48.998 20:43:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:48.998 20:43:07 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:25:48.998 20:43:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:48.998 20:43:07 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:48.998 20:43:07 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:48.998 20:43:07 -- target/tls.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:48.998 [2024-04-26 20:43:07.227276] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:48.998 20:43:07 -- target/tls.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:49.256 20:43:07 -- target/tls.sh@62 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:49.256 [2024-04-26 20:43:07.475309] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is 
considered experimental 00:25:49.256 [2024-04-26 20:43:07.475538] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:49.256 20:43:07 -- target/tls.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:49.512 malloc0 00:25:49.512 20:43:07 -- target/tls.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:49.512 20:43:07 -- target/tls.sh@67 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:49.769 [2024-04-26 20:43:07.874224] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:25:49.769 [2024-04-26 20:43:07.874258] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:25:49.770 [2024-04-26 20:43:07.874277] subsystem.c: 840:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:25:49.770 request: 00:25:49.770 { 00:25:49.770 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:49.770 "host": "nqn.2016-06.io.spdk:host1", 00:25:49.770 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:25:49.770 "method": "nvmf_subsystem_add_host", 00:25:49.770 "req_id": 1 00:25:49.770 } 00:25:49.770 Got JSON-RPC error response 00:25:49.770 response: 00:25:49.770 { 00:25:49.770 "code": -32603, 00:25:49.770 "message": "Internal error" 00:25:49.770 } 00:25:49.770 20:43:07 -- common/autotest_common.sh@643 -- # es=1 00:25:49.770 20:43:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:49.770 20:43:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:49.770 20:43:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:49.770 20:43:07 -- target/tls.sh@189 -- # killprocess 3634583 00:25:49.770 20:43:07 -- common/autotest_common.sh@926 -- # '[' -z 3634583 ']' 00:25:49.770 20:43:07 -- common/autotest_common.sh@930 -- # kill -0 3634583 00:25:49.770 20:43:07 -- common/autotest_common.sh@931 -- # uname 00:25:49.770 20:43:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:49.770 20:43:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3634583 00:25:49.770 20:43:07 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:49.770 20:43:07 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:49.770 20:43:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3634583' 00:25:49.770 killing process with pid 3634583 00:25:49.770 20:43:07 -- common/autotest_common.sh@945 -- # kill 3634583 00:25:49.770 20:43:07 -- common/autotest_common.sh@950 -- # wait 3634583 00:25:50.340 20:43:08 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:50.340 20:43:08 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:25:50.340 20:43:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:50.340 20:43:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:50.340 20:43:08 -- common/autotest_common.sh@10 -- # set +x 00:25:50.340 20:43:08 -- nvmf/common.sh@469 -- # nvmfpid=3635206 00:25:50.340 20:43:08 -- nvmf/common.sh@470 -- # waitforlisten 3635206 00:25:50.340 20:43:08 -- common/autotest_common.sh@819 -- # '[' -z 3635206 ']' 00:25:50.340 20:43:08 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:25:50.340 20:43:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:50.340 20:43:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:50.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:50.340 20:43:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:50.340 20:43:08 -- common/autotest_common.sh@10 -- # set +x 00:25:50.340 20:43:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:50.340 [2024-04-26 20:43:08.532489] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:50.340 [2024-04-26 20:43:08.532613] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:50.340 EAL: No free 2048 kB hugepages reported on node 1 00:25:50.340 [2024-04-26 20:43:08.659536] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.601 [2024-04-26 20:43:08.756073] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:50.601 [2024-04-26 20:43:08.756254] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:50.601 [2024-04-26 20:43:08.756269] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:50.601 [2024-04-26 20:43:08.756278] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:50.601 [2024-04-26 20:43:08.756305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:51.169 20:43:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:51.169 20:43:09 -- common/autotest_common.sh@852 -- # return 0 00:25:51.169 20:43:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:51.169 20:43:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:51.169 20:43:09 -- common/autotest_common.sh@10 -- # set +x 00:25:51.169 20:43:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:51.169 20:43:09 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:51.169 20:43:09 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:51.169 20:43:09 -- target/tls.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:51.169 [2024-04-26 20:43:09.358432] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:51.169 20:43:09 -- target/tls.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:51.169 20:43:09 -- target/tls.sh@62 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:51.426 [2024-04-26 20:43:09.626496] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:51.426 [2024-04-26 20:43:09.626738] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:51.426 20:43:09 -- target/tls.sh@64 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:51.684 malloc0 00:25:51.684 20:43:09 -- target/tls.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:51.684 20:43:09 -- target/tls.sh@67 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:51.945 20:43:10 -- target/tls.sh@197 -- # bdevperf_pid=3635542 00:25:51.945 20:43:10 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:51.945 20:43:10 -- target/tls.sh@200 -- # waitforlisten 3635542 /var/tmp/bdevperf.sock 00:25:51.945 20:43:10 -- target/tls.sh@196 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:51.945 20:43:10 -- common/autotest_common.sh@819 -- # '[' -z 3635542 ']' 00:25:51.945 20:43:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:51.945 20:43:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:51.945 20:43:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:51.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:51.945 20:43:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:51.945 20:43:10 -- common/autotest_common.sh@10 -- # set +x 00:25:51.945 [2024-04-26 20:43:10.159373] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:51.945 [2024-04-26 20:43:10.159568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3635542 ] 00:25:51.945 EAL: No free 2048 kB hugepages reported on node 1 00:25:51.945 [2024-04-26 20:43:10.280117] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.206 [2024-04-26 20:43:10.375657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:52.779 20:43:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:52.779 20:43:10 -- common/autotest_common.sh@852 -- # return 0 00:25:52.779 20:43:10 -- target/tls.sh@201 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:52.779 [2024-04-26 20:43:10.989529] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:52.779 TLSTESTn1 00:25:52.779 20:43:11 -- target/tls.sh@205 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py save_config 00:25:53.038 20:43:11 -- target/tls.sh@205 -- # tgtconf='{ 00:25:53.038 "subsystems": [ 00:25:53.038 { 00:25:53.038 "subsystem": "iobuf", 00:25:53.038 "config": [ 00:25:53.038 { 00:25:53.038 "method": "iobuf_set_options", 00:25:53.038 "params": { 00:25:53.038 "small_pool_count": 8192, 00:25:53.038 "large_pool_count": 1024, 00:25:53.038 "small_bufsize": 8192, 00:25:53.038 "large_bufsize": 135168 00:25:53.038 } 00:25:53.038 } 00:25:53.038 ] 00:25:53.038 
}, 00:25:53.038 { 00:25:53.038 "subsystem": "sock", 00:25:53.038 "config": [ 00:25:53.038 { 00:25:53.038 "method": "sock_impl_set_options", 00:25:53.038 "params": { 00:25:53.038 "impl_name": "posix", 00:25:53.038 "recv_buf_size": 2097152, 00:25:53.038 "send_buf_size": 2097152, 00:25:53.038 "enable_recv_pipe": true, 00:25:53.038 "enable_quickack": false, 00:25:53.038 "enable_placement_id": 0, 00:25:53.038 "enable_zerocopy_send_server": true, 00:25:53.038 "enable_zerocopy_send_client": false, 00:25:53.038 "zerocopy_threshold": 0, 00:25:53.038 "tls_version": 0, 00:25:53.038 "enable_ktls": false 00:25:53.038 } 00:25:53.038 }, 00:25:53.038 { 00:25:53.038 "method": "sock_impl_set_options", 00:25:53.038 "params": { 00:25:53.038 "impl_name": "ssl", 00:25:53.038 "recv_buf_size": 4096, 00:25:53.038 "send_buf_size": 4096, 00:25:53.038 "enable_recv_pipe": true, 00:25:53.038 "enable_quickack": false, 00:25:53.038 "enable_placement_id": 0, 00:25:53.038 "enable_zerocopy_send_server": true, 00:25:53.038 "enable_zerocopy_send_client": false, 00:25:53.038 "zerocopy_threshold": 0, 00:25:53.038 "tls_version": 0, 00:25:53.038 "enable_ktls": false 00:25:53.038 } 00:25:53.038 } 00:25:53.038 ] 00:25:53.038 }, 00:25:53.038 { 00:25:53.038 "subsystem": "vmd", 00:25:53.038 "config": [] 00:25:53.038 }, 00:25:53.038 { 00:25:53.038 "subsystem": "accel", 00:25:53.038 "config": [ 00:25:53.038 { 00:25:53.038 "method": "accel_set_options", 00:25:53.038 "params": { 00:25:53.038 "small_cache_size": 128, 00:25:53.038 "large_cache_size": 16, 00:25:53.038 "task_count": 2048, 00:25:53.038 "sequence_count": 2048, 00:25:53.038 "buf_count": 2048 00:25:53.038 } 00:25:53.038 } 00:25:53.038 ] 00:25:53.038 }, 00:25:53.038 { 00:25:53.038 "subsystem": "bdev", 00:25:53.038 "config": [ 00:25:53.038 { 00:25:53.038 "method": "bdev_set_options", 00:25:53.038 "params": { 00:25:53.038 "bdev_io_pool_size": 65535, 00:25:53.038 "bdev_io_cache_size": 256, 00:25:53.038 "bdev_auto_examine": true, 00:25:53.038 "iobuf_small_cache_size": 128, 00:25:53.038 "iobuf_large_cache_size": 16 00:25:53.038 } 00:25:53.038 }, 00:25:53.038 { 00:25:53.038 "method": "bdev_raid_set_options", 00:25:53.038 "params": { 00:25:53.038 "process_window_size_kb": 1024 00:25:53.039 } 00:25:53.039 }, 00:25:53.039 { 00:25:53.039 "method": "bdev_iscsi_set_options", 00:25:53.039 "params": { 00:25:53.039 "timeout_sec": 30 00:25:53.039 } 00:25:53.039 }, 00:25:53.039 { 00:25:53.039 "method": "bdev_nvme_set_options", 00:25:53.039 "params": { 00:25:53.039 "action_on_timeout": "none", 00:25:53.039 "timeout_us": 0, 00:25:53.039 "timeout_admin_us": 0, 00:25:53.039 "keep_alive_timeout_ms": 10000, 00:25:53.039 "transport_retry_count": 4, 00:25:53.039 "arbitration_burst": 0, 00:25:53.039 "low_priority_weight": 0, 00:25:53.039 "medium_priority_weight": 0, 00:25:53.039 "high_priority_weight": 0, 00:25:53.039 "nvme_adminq_poll_period_us": 10000, 00:25:53.039 "nvme_ioq_poll_period_us": 0, 00:25:53.039 "io_queue_requests": 0, 00:25:53.039 "delay_cmd_submit": true, 00:25:53.039 "bdev_retry_count": 3, 00:25:53.039 "transport_ack_timeout": 0, 00:25:53.039 "ctrlr_loss_timeout_sec": 0, 00:25:53.039 "reconnect_delay_sec": 0, 00:25:53.039 "fast_io_fail_timeout_sec": 0, 00:25:53.039 "generate_uuids": false, 00:25:53.039 "transport_tos": 0, 00:25:53.039 "io_path_stat": false, 00:25:53.039 "allow_accel_sequence": false 00:25:53.039 } 00:25:53.039 }, 00:25:53.039 { 00:25:53.039 "method": "bdev_nvme_set_hotplug", 00:25:53.039 "params": { 00:25:53.039 "period_us": 100000, 00:25:53.039 "enable": false 
00:25:53.039 } 00:25:53.039 }, 00:25:53.039 { 00:25:53.039 "method": "bdev_malloc_create", 00:25:53.039 "params": { 00:25:53.039 "name": "malloc0", 00:25:53.039 "num_blocks": 8192, 00:25:53.039 "block_size": 4096, 00:25:53.039 "physical_block_size": 4096, 00:25:53.039 "uuid": "0bd5c6cd-ab57-4629-883a-ba996aeae49e", 00:25:53.039 "optimal_io_boundary": 0 00:25:53.039 } 00:25:53.039 }, 00:25:53.039 { 00:25:53.039 "method": "bdev_wait_for_examine" 00:25:53.039 } 00:25:53.039 ] 00:25:53.039 }, 00:25:53.039 { 00:25:53.039 "subsystem": "nbd", 00:25:53.039 "config": [] 00:25:53.039 }, 00:25:53.039 { 00:25:53.039 "subsystem": "scheduler", 00:25:53.039 "config": [ 00:25:53.039 { 00:25:53.039 "method": "framework_set_scheduler", 00:25:53.039 "params": { 00:25:53.039 "name": "static" 00:25:53.039 } 00:25:53.039 } 00:25:53.039 ] 00:25:53.039 }, 00:25:53.039 { 00:25:53.039 "subsystem": "nvmf", 00:25:53.039 "config": [ 00:25:53.039 { 00:25:53.039 "method": "nvmf_set_config", 00:25:53.039 "params": { 00:25:53.039 "discovery_filter": "match_any", 00:25:53.039 "admin_cmd_passthru": { 00:25:53.039 "identify_ctrlr": false 00:25:53.039 } 00:25:53.039 } 00:25:53.039 }, 00:25:53.039 { 00:25:53.039 "method": "nvmf_set_max_subsystems", 00:25:53.039 "params": { 00:25:53.039 "max_subsystems": 1024 00:25:53.039 } 00:25:53.039 }, 00:25:53.039 { 00:25:53.039 "method": "nvmf_set_crdt", 00:25:53.039 "params": { 00:25:53.039 "crdt1": 0, 00:25:53.039 "crdt2": 0, 00:25:53.039 "crdt3": 0 00:25:53.039 } 00:25:53.039 }, 00:25:53.039 { 00:25:53.039 "method": "nvmf_create_transport", 00:25:53.039 "params": { 00:25:53.039 "trtype": "TCP", 00:25:53.039 "max_queue_depth": 128, 00:25:53.039 "max_io_qpairs_per_ctrlr": 127, 00:25:53.039 "in_capsule_data_size": 4096, 00:25:53.039 "max_io_size": 131072, 00:25:53.039 "io_unit_size": 131072, 00:25:53.039 "max_aq_depth": 128, 00:25:53.039 "num_shared_buffers": 511, 00:25:53.039 "buf_cache_size": 4294967295, 00:25:53.039 "dif_insert_or_strip": false, 00:25:53.039 "zcopy": false, 00:25:53.039 "c2h_success": false, 00:25:53.039 "sock_priority": 0, 00:25:53.039 "abort_timeout_sec": 1 00:25:53.039 } 00:25:53.039 }, 00:25:53.039 { 00:25:53.039 "method": "nvmf_create_subsystem", 00:25:53.039 "params": { 00:25:53.039 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:53.039 "allow_any_host": false, 00:25:53.039 "serial_number": "SPDK00000000000001", 00:25:53.039 "model_number": "SPDK bdev Controller", 00:25:53.039 "max_namespaces": 10, 00:25:53.039 "min_cntlid": 1, 00:25:53.039 "max_cntlid": 65519, 00:25:53.039 "ana_reporting": false 00:25:53.039 } 00:25:53.039 }, 00:25:53.039 { 00:25:53.039 "method": "nvmf_subsystem_add_host", 00:25:53.039 "params": { 00:25:53.039 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:53.039 "host": "nqn.2016-06.io.spdk:host1", 00:25:53.039 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:25:53.039 } 00:25:53.039 }, 00:25:53.039 { 00:25:53.039 "method": "nvmf_subsystem_add_ns", 00:25:53.039 "params": { 00:25:53.039 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:53.039 "namespace": { 00:25:53.039 "nsid": 1, 00:25:53.039 "bdev_name": "malloc0", 00:25:53.039 "nguid": "0BD5C6CDAB574629883ABA996AEAE49E", 00:25:53.039 "uuid": "0bd5c6cd-ab57-4629-883a-ba996aeae49e" 00:25:53.039 } 00:25:53.039 } 00:25:53.039 }, 00:25:53.039 { 00:25:53.039 "method": "nvmf_subsystem_add_listener", 00:25:53.039 "params": { 00:25:53.039 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:53.039 "listen_address": { 00:25:53.039 "trtype": "TCP", 00:25:53.039 "adrfam": "IPv4", 00:25:53.039 
"traddr": "10.0.0.2", 00:25:53.039 "trsvcid": "4420" 00:25:53.039 }, 00:25:53.039 "secure_channel": true 00:25:53.039 } 00:25:53.039 } 00:25:53.039 ] 00:25:53.039 } 00:25:53.039 ] 00:25:53.039 }' 00:25:53.039 20:43:11 -- target/tls.sh@206 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:53.299 20:43:11 -- target/tls.sh@206 -- # bdevperfconf='{ 00:25:53.299 "subsystems": [ 00:25:53.299 { 00:25:53.299 "subsystem": "iobuf", 00:25:53.299 "config": [ 00:25:53.299 { 00:25:53.299 "method": "iobuf_set_options", 00:25:53.299 "params": { 00:25:53.299 "small_pool_count": 8192, 00:25:53.299 "large_pool_count": 1024, 00:25:53.299 "small_bufsize": 8192, 00:25:53.299 "large_bufsize": 135168 00:25:53.299 } 00:25:53.299 } 00:25:53.299 ] 00:25:53.299 }, 00:25:53.299 { 00:25:53.299 "subsystem": "sock", 00:25:53.299 "config": [ 00:25:53.299 { 00:25:53.299 "method": "sock_impl_set_options", 00:25:53.299 "params": { 00:25:53.299 "impl_name": "posix", 00:25:53.299 "recv_buf_size": 2097152, 00:25:53.299 "send_buf_size": 2097152, 00:25:53.299 "enable_recv_pipe": true, 00:25:53.299 "enable_quickack": false, 00:25:53.299 "enable_placement_id": 0, 00:25:53.299 "enable_zerocopy_send_server": true, 00:25:53.299 "enable_zerocopy_send_client": false, 00:25:53.299 "zerocopy_threshold": 0, 00:25:53.299 "tls_version": 0, 00:25:53.299 "enable_ktls": false 00:25:53.299 } 00:25:53.299 }, 00:25:53.299 { 00:25:53.299 "method": "sock_impl_set_options", 00:25:53.299 "params": { 00:25:53.299 "impl_name": "ssl", 00:25:53.299 "recv_buf_size": 4096, 00:25:53.299 "send_buf_size": 4096, 00:25:53.299 "enable_recv_pipe": true, 00:25:53.299 "enable_quickack": false, 00:25:53.299 "enable_placement_id": 0, 00:25:53.299 "enable_zerocopy_send_server": true, 00:25:53.299 "enable_zerocopy_send_client": false, 00:25:53.299 "zerocopy_threshold": 0, 00:25:53.299 "tls_version": 0, 00:25:53.299 "enable_ktls": false 00:25:53.299 } 00:25:53.299 } 00:25:53.299 ] 00:25:53.299 }, 00:25:53.299 { 00:25:53.299 "subsystem": "vmd", 00:25:53.299 "config": [] 00:25:53.299 }, 00:25:53.299 { 00:25:53.299 "subsystem": "accel", 00:25:53.299 "config": [ 00:25:53.299 { 00:25:53.299 "method": "accel_set_options", 00:25:53.299 "params": { 00:25:53.299 "small_cache_size": 128, 00:25:53.299 "large_cache_size": 16, 00:25:53.299 "task_count": 2048, 00:25:53.299 "sequence_count": 2048, 00:25:53.299 "buf_count": 2048 00:25:53.299 } 00:25:53.299 } 00:25:53.299 ] 00:25:53.299 }, 00:25:53.299 { 00:25:53.299 "subsystem": "bdev", 00:25:53.299 "config": [ 00:25:53.299 { 00:25:53.299 "method": "bdev_set_options", 00:25:53.299 "params": { 00:25:53.299 "bdev_io_pool_size": 65535, 00:25:53.299 "bdev_io_cache_size": 256, 00:25:53.299 "bdev_auto_examine": true, 00:25:53.299 "iobuf_small_cache_size": 128, 00:25:53.299 "iobuf_large_cache_size": 16 00:25:53.299 } 00:25:53.299 }, 00:25:53.299 { 00:25:53.299 "method": "bdev_raid_set_options", 00:25:53.299 "params": { 00:25:53.299 "process_window_size_kb": 1024 00:25:53.299 } 00:25:53.299 }, 00:25:53.299 { 00:25:53.299 "method": "bdev_iscsi_set_options", 00:25:53.299 "params": { 00:25:53.299 "timeout_sec": 30 00:25:53.299 } 00:25:53.299 }, 00:25:53.299 { 00:25:53.299 "method": "bdev_nvme_set_options", 00:25:53.299 "params": { 00:25:53.299 "action_on_timeout": "none", 00:25:53.299 "timeout_us": 0, 00:25:53.299 "timeout_admin_us": 0, 00:25:53.299 "keep_alive_timeout_ms": 10000, 00:25:53.299 "transport_retry_count": 4, 00:25:53.299 "arbitration_burst": 0, 00:25:53.299 
"low_priority_weight": 0, 00:25:53.299 "medium_priority_weight": 0, 00:25:53.299 "high_priority_weight": 0, 00:25:53.299 "nvme_adminq_poll_period_us": 10000, 00:25:53.299 "nvme_ioq_poll_period_us": 0, 00:25:53.299 "io_queue_requests": 512, 00:25:53.299 "delay_cmd_submit": true, 00:25:53.299 "bdev_retry_count": 3, 00:25:53.299 "transport_ack_timeout": 0, 00:25:53.299 "ctrlr_loss_timeout_sec": 0, 00:25:53.299 "reconnect_delay_sec": 0, 00:25:53.299 "fast_io_fail_timeout_sec": 0, 00:25:53.299 "generate_uuids": false, 00:25:53.299 "transport_tos": 0, 00:25:53.299 "io_path_stat": false, 00:25:53.299 "allow_accel_sequence": false 00:25:53.299 } 00:25:53.299 }, 00:25:53.299 { 00:25:53.299 "method": "bdev_nvme_attach_controller", 00:25:53.299 "params": { 00:25:53.299 "name": "TLSTEST", 00:25:53.299 "trtype": "TCP", 00:25:53.299 "adrfam": "IPv4", 00:25:53.299 "traddr": "10.0.0.2", 00:25:53.299 "trsvcid": "4420", 00:25:53.299 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:53.299 "prchk_reftag": false, 00:25:53.299 "prchk_guard": false, 00:25:53.299 "ctrlr_loss_timeout_sec": 0, 00:25:53.299 "reconnect_delay_sec": 0, 00:25:53.299 "fast_io_fail_timeout_sec": 0, 00:25:53.299 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:25:53.299 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:53.299 "hdgst": false, 00:25:53.299 "ddgst": false 00:25:53.299 } 00:25:53.299 }, 00:25:53.299 { 00:25:53.299 "method": "bdev_nvme_set_hotplug", 00:25:53.299 "params": { 00:25:53.299 "period_us": 100000, 00:25:53.299 "enable": false 00:25:53.299 } 00:25:53.299 }, 00:25:53.299 { 00:25:53.299 "method": "bdev_wait_for_examine" 00:25:53.299 } 00:25:53.299 ] 00:25:53.299 }, 00:25:53.299 { 00:25:53.299 "subsystem": "nbd", 00:25:53.299 "config": [] 00:25:53.299 } 00:25:53.299 ] 00:25:53.299 }' 00:25:53.299 20:43:11 -- target/tls.sh@208 -- # killprocess 3635542 00:25:53.299 20:43:11 -- common/autotest_common.sh@926 -- # '[' -z 3635542 ']' 00:25:53.299 20:43:11 -- common/autotest_common.sh@930 -- # kill -0 3635542 00:25:53.299 20:43:11 -- common/autotest_common.sh@931 -- # uname 00:25:53.299 20:43:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:53.299 20:43:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3635542 00:25:53.299 20:43:11 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:53.299 20:43:11 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:53.299 20:43:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3635542' 00:25:53.299 killing process with pid 3635542 00:25:53.299 20:43:11 -- common/autotest_common.sh@945 -- # kill 3635542 00:25:53.299 Received shutdown signal, test time was about 10.000000 seconds 00:25:53.299 00:25:53.299 Latency(us) 00:25:53.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:53.299 =================================================================================================================== 00:25:53.299 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:53.299 20:43:11 -- common/autotest_common.sh@950 -- # wait 3635542 00:25:53.558 20:43:11 -- target/tls.sh@209 -- # killprocess 3635206 00:25:53.558 20:43:11 -- common/autotest_common.sh@926 -- # '[' -z 3635206 ']' 00:25:53.558 20:43:11 -- common/autotest_common.sh@930 -- # kill -0 3635206 00:25:53.558 20:43:11 -- common/autotest_common.sh@931 -- # uname 00:25:53.558 20:43:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:53.558 20:43:11 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3635206 00:25:53.817 20:43:11 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:53.817 20:43:11 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:53.817 20:43:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3635206' 00:25:53.817 killing process with pid 3635206 00:25:53.817 20:43:11 -- common/autotest_common.sh@945 -- # kill 3635206 00:25:53.817 20:43:11 -- common/autotest_common.sh@950 -- # wait 3635206 00:25:54.080 20:43:12 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:25:54.080 20:43:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:54.080 20:43:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:54.080 20:43:12 -- common/autotest_common.sh@10 -- # set +x 00:25:54.080 20:43:12 -- target/tls.sh@212 -- # echo '{ 00:25:54.080 "subsystems": [ 00:25:54.080 { 00:25:54.080 "subsystem": "iobuf", 00:25:54.080 "config": [ 00:25:54.080 { 00:25:54.080 "method": "iobuf_set_options", 00:25:54.080 "params": { 00:25:54.080 "small_pool_count": 8192, 00:25:54.080 "large_pool_count": 1024, 00:25:54.080 "small_bufsize": 8192, 00:25:54.080 "large_bufsize": 135168 00:25:54.080 } 00:25:54.080 } 00:25:54.080 ] 00:25:54.080 }, 00:25:54.080 { 00:25:54.080 "subsystem": "sock", 00:25:54.080 "config": [ 00:25:54.080 { 00:25:54.080 "method": "sock_impl_set_options", 00:25:54.080 "params": { 00:25:54.080 "impl_name": "posix", 00:25:54.080 "recv_buf_size": 2097152, 00:25:54.080 "send_buf_size": 2097152, 00:25:54.080 "enable_recv_pipe": true, 00:25:54.080 "enable_quickack": false, 00:25:54.080 "enable_placement_id": 0, 00:25:54.080 "enable_zerocopy_send_server": true, 00:25:54.080 "enable_zerocopy_send_client": false, 00:25:54.080 "zerocopy_threshold": 0, 00:25:54.080 "tls_version": 0, 00:25:54.080 "enable_ktls": false 00:25:54.080 } 00:25:54.080 }, 00:25:54.080 { 00:25:54.080 "method": "sock_impl_set_options", 00:25:54.080 "params": { 00:25:54.080 "impl_name": "ssl", 00:25:54.080 "recv_buf_size": 4096, 00:25:54.080 "send_buf_size": 4096, 00:25:54.080 "enable_recv_pipe": true, 00:25:54.080 "enable_quickack": false, 00:25:54.080 "enable_placement_id": 0, 00:25:54.080 "enable_zerocopy_send_server": true, 00:25:54.080 "enable_zerocopy_send_client": false, 00:25:54.080 "zerocopy_threshold": 0, 00:25:54.080 "tls_version": 0, 00:25:54.080 "enable_ktls": false 00:25:54.080 } 00:25:54.080 } 00:25:54.080 ] 00:25:54.080 }, 00:25:54.080 { 00:25:54.080 "subsystem": "vmd", 00:25:54.080 "config": [] 00:25:54.080 }, 00:25:54.080 { 00:25:54.080 "subsystem": "accel", 00:25:54.080 "config": [ 00:25:54.080 { 00:25:54.080 "method": "accel_set_options", 00:25:54.080 "params": { 00:25:54.080 "small_cache_size": 128, 00:25:54.080 "large_cache_size": 16, 00:25:54.080 "task_count": 2048, 00:25:54.080 "sequence_count": 2048, 00:25:54.080 "buf_count": 2048 00:25:54.080 } 00:25:54.080 } 00:25:54.080 ] 00:25:54.080 }, 00:25:54.080 { 00:25:54.080 "subsystem": "bdev", 00:25:54.080 "config": [ 00:25:54.080 { 00:25:54.080 "method": "bdev_set_options", 00:25:54.080 "params": { 00:25:54.080 "bdev_io_pool_size": 65535, 00:25:54.080 "bdev_io_cache_size": 256, 00:25:54.080 "bdev_auto_examine": true, 00:25:54.080 "iobuf_small_cache_size": 128, 00:25:54.080 "iobuf_large_cache_size": 16 00:25:54.080 } 00:25:54.080 }, 00:25:54.080 { 00:25:54.080 "method": "bdev_raid_set_options", 00:25:54.080 "params": { 00:25:54.080 "process_window_size_kb": 1024 00:25:54.080 } 00:25:54.080 }, 00:25:54.080 { 
00:25:54.080 "method": "bdev_iscsi_set_options", 00:25:54.080 "params": { 00:25:54.080 "timeout_sec": 30 00:25:54.080 } 00:25:54.080 }, 00:25:54.080 { 00:25:54.080 "method": "bdev_nvme_set_options", 00:25:54.080 "params": { 00:25:54.080 "action_on_timeout": "none", 00:25:54.080 "timeout_us": 0, 00:25:54.080 "timeout_admin_us": 0, 00:25:54.080 "keep_alive_timeout_ms": 10000, 00:25:54.080 "transport_retry_count": 4, 00:25:54.080 "arbitration_burst": 0, 00:25:54.080 "low_priority_weight": 0, 00:25:54.080 "medium_priority_weight": 0, 00:25:54.080 "high_priority_weight": 0, 00:25:54.080 "nvme_adminq_poll_period_us": 10000, 00:25:54.080 "nvme_ioq_poll_period_us": 0, 00:25:54.080 "io_queue_requests": 0, 00:25:54.080 "delay_cmd_submit": true, 00:25:54.080 "bdev_retry_count": 3, 00:25:54.080 "transport_ack_timeout": 0, 00:25:54.080 "ctrlr_loss_timeout_sec": 0, 00:25:54.080 "reconnect_delay_sec": 0, 00:25:54.080 "fast_io_fail_timeout_sec": 0, 00:25:54.080 "generate_uuids": false, 00:25:54.080 "transport_tos": 0, 00:25:54.080 "io_path_stat": false, 00:25:54.080 "allow_accel_sequence": false 00:25:54.080 } 00:25:54.080 }, 00:25:54.080 { 00:25:54.080 "method": "bdev_nvme_set_hotplug", 00:25:54.080 "params": { 00:25:54.080 "period_us": 100000, 00:25:54.080 "enable": false 00:25:54.080 } 00:25:54.080 }, 00:25:54.080 { 00:25:54.080 "method": "bdev_malloc_create", 00:25:54.080 "params": { 00:25:54.080 "name": "malloc0", 00:25:54.080 "num_blocks": 8192, 00:25:54.080 "block_size": 4096, 00:25:54.080 "physical_block_size": 4096, 00:25:54.081 "uuid": "0bd5c6cd-ab57-4629-883a-ba996aeae49e", 00:25:54.081 "optimal_io_boundary": 0 00:25:54.081 } 00:25:54.081 }, 00:25:54.081 { 00:25:54.081 "method": "bdev_wait_for_examine" 00:25:54.081 } 00:25:54.081 ] 00:25:54.081 }, 00:25:54.081 { 00:25:54.081 "subsystem": "nbd", 00:25:54.081 "config": [] 00:25:54.081 }, 00:25:54.081 { 00:25:54.081 "subsystem": "scheduler", 00:25:54.081 "config": [ 00:25:54.081 { 00:25:54.081 "method": "framework_set_scheduler", 00:25:54.081 "params": { 00:25:54.081 "name": "static" 00:25:54.081 } 00:25:54.081 } 00:25:54.081 ] 00:25:54.081 }, 00:25:54.081 { 00:25:54.081 "subsystem": "nvmf", 00:25:54.081 "config": [ 00:25:54.081 { 00:25:54.081 "method": "nvmf_set_config", 00:25:54.081 "params": { 00:25:54.081 "discovery_filter": "match_any", 00:25:54.081 "admin_cmd_passthru": { 00:25:54.081 "identify_ctrlr": false 00:25:54.081 } 00:25:54.081 } 00:25:54.081 }, 00:25:54.081 { 00:25:54.081 "method": "nvmf_set_max_subsystems", 00:25:54.081 "params": { 00:25:54.081 "max_subsystems": 1024 00:25:54.081 } 00:25:54.081 }, 00:25:54.081 { 00:25:54.081 "method": "nvmf_set_crdt", 00:25:54.081 "params": { 00:25:54.081 "crdt1": 0, 00:25:54.081 "crdt2": 0, 00:25:54.081 "crdt3": 0 00:25:54.081 } 00:25:54.081 }, 00:25:54.081 { 00:25:54.081 "method": "nvmf_create_transport", 00:25:54.081 "params": { 00:25:54.081 "trtype": "TCP", 00:25:54.081 "max_queue_depth": 128, 00:25:54.081 "max_io_qpairs_per_ctrlr": 127, 00:25:54.081 "in_capsule_data_size": 4096, 00:25:54.081 "max_io_size": 131072, 00:25:54.081 "io_unit_size": 131072, 00:25:54.081 "max_aq_depth": 128, 00:25:54.081 "num_shared_buffers": 511, 00:25:54.081 "buf_cache_size": 4294967295, 00:25:54.081 "dif_insert_or_strip": false, 00:25:54.081 "zcopy": false, 00:25:54.081 "c2h_success": false, 00:25:54.081 "sock_priority": 0, 00:25:54.081 "abort_timeout_sec": 1 00:25:54.081 } 00:25:54.081 }, 00:25:54.081 { 00:25:54.081 "method": "nvmf_create_subsystem", 00:25:54.081 "params": { 00:25:54.081 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:25:54.081 "allow_any_host": false, 00:25:54.081 "serial_number": "SPDK00000000000001", 00:25:54.081 "model_number": "SPDK bdev Controller", 00:25:54.081 "max_namespaces": 10, 00:25:54.081 "min_cntlid": 1, 00:25:54.081 "max_cntlid": 65519, 00:25:54.081 "ana_reporting": false 00:25:54.081 } 00:25:54.081 }, 00:25:54.081 { 00:25:54.081 "method": "nvmf_subsystem_add_host", 00:25:54.081 "params": { 00:25:54.081 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:54.081 "host": "nqn.2016-06.io.spdk:host1", 00:25:54.081 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:25:54.081 } 00:25:54.081 }, 00:25:54.081 { 00:25:54.081 "method": "nvmf_subsystem_add_ns", 00:25:54.081 "params": { 00:25:54.081 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:54.081 "namespace": { 00:25:54.081 "nsid": 1, 00:25:54.081 "bdev_name": "malloc0", 00:25:54.081 "nguid": "0BD5C6CDAB574629883ABA996AEAE49E", 00:25:54.081 "uuid": "0bd5c6cd-ab57-4629-883a-ba996aeae49e" 00:25:54.081 } 00:25:54.081 } 00:25:54.081 }, 00:25:54.081 { 00:25:54.081 "method": "nvmf_subsystem_add_listener", 00:25:54.081 "params": { 00:25:54.081 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:54.081 "listen_address": { 00:25:54.081 "trtype": "TCP", 00:25:54.081 "adrfam": "IPv4", 00:25:54.081 "traddr": "10.0.0.2", 00:25:54.081 "trsvcid": "4420" 00:25:54.081 }, 00:25:54.081 "secure_channel": true 00:25:54.081 } 00:25:54.081 } 00:25:54.081 ] 00:25:54.081 } 00:25:54.081 ] 00:25:54.081 }' 00:25:54.081 20:43:12 -- nvmf/common.sh@469 -- # nvmfpid=3635873 00:25:54.081 20:43:12 -- nvmf/common.sh@470 -- # waitforlisten 3635873 00:25:54.081 20:43:12 -- common/autotest_common.sh@819 -- # '[' -z 3635873 ']' 00:25:54.081 20:43:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.081 20:43:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:54.081 20:43:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.081 20:43:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:54.081 20:43:12 -- common/autotest_common.sh@10 -- # set +x 00:25:54.081 20:43:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:25:54.341 [2024-04-26 20:43:12.473080] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:54.341 [2024-04-26 20:43:12.473210] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:54.341 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.341 [2024-04-26 20:43:12.602700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.601 [2024-04-26 20:43:12.700672] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:54.601 [2024-04-26 20:43:12.700856] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:54.601 [2024-04-26 20:43:12.700872] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:54.601 [2024-04-26 20:43:12.700883] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:54.601 [2024-04-26 20:43:12.700922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:54.861 [2024-04-26 20:43:12.984390] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:54.861 [2024-04-26 20:43:13.036511] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:54.861 [2024-04-26 20:43:13.036788] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:54.861 20:43:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:54.861 20:43:13 -- common/autotest_common.sh@852 -- # return 0 00:25:54.861 20:43:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:54.861 20:43:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:54.861 20:43:13 -- common/autotest_common.sh@10 -- # set +x 00:25:54.861 20:43:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:54.861 20:43:13 -- target/tls.sh@216 -- # bdevperf_pid=3636189 00:25:54.861 20:43:13 -- target/tls.sh@217 -- # waitforlisten 3636189 /var/tmp/bdevperf.sock 00:25:54.861 20:43:13 -- common/autotest_common.sh@819 -- # '[' -z 3636189 ']' 00:25:54.861 20:43:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:54.861 20:43:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:54.861 20:43:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:54.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:54.861 20:43:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:54.861 20:43:13 -- common/autotest_common.sh@10 -- # set +x 00:25:54.861 20:43:13 -- target/tls.sh@213 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:25:54.861 20:43:13 -- target/tls.sh@213 -- # echo '{ 00:25:54.861 "subsystems": [ 00:25:54.861 { 00:25:54.861 "subsystem": "iobuf", 00:25:54.861 "config": [ 00:25:54.861 { 00:25:54.861 "method": "iobuf_set_options", 00:25:54.861 "params": { 00:25:54.861 "small_pool_count": 8192, 00:25:54.861 "large_pool_count": 1024, 00:25:54.861 "small_bufsize": 8192, 00:25:54.861 "large_bufsize": 135168 00:25:54.861 } 00:25:54.861 } 00:25:54.861 ] 00:25:54.861 }, 00:25:54.861 { 00:25:54.861 "subsystem": "sock", 00:25:54.861 "config": [ 00:25:54.861 { 00:25:54.861 "method": "sock_impl_set_options", 00:25:54.861 "params": { 00:25:54.861 "impl_name": "posix", 00:25:54.861 "recv_buf_size": 2097152, 00:25:54.861 "send_buf_size": 2097152, 00:25:54.861 "enable_recv_pipe": true, 00:25:54.861 "enable_quickack": false, 00:25:54.861 "enable_placement_id": 0, 00:25:54.861 "enable_zerocopy_send_server": true, 00:25:54.861 "enable_zerocopy_send_client": false, 00:25:54.861 "zerocopy_threshold": 0, 00:25:54.861 "tls_version": 0, 00:25:54.861 "enable_ktls": false 00:25:54.861 } 00:25:54.861 }, 00:25:54.861 { 00:25:54.861 "method": "sock_impl_set_options", 00:25:54.861 "params": { 00:25:54.861 "impl_name": "ssl", 00:25:54.861 "recv_buf_size": 4096, 00:25:54.861 "send_buf_size": 4096, 00:25:54.861 "enable_recv_pipe": true, 00:25:54.861 "enable_quickack": false, 00:25:54.861 "enable_placement_id": 0, 00:25:54.861 "enable_zerocopy_send_server": true, 00:25:54.861 "enable_zerocopy_send_client": false, 00:25:54.861 "zerocopy_threshold": 0, 00:25:54.861 "tls_version": 0, 00:25:54.861 
"enable_ktls": false 00:25:54.861 } 00:25:54.861 } 00:25:54.861 ] 00:25:54.861 }, 00:25:54.861 { 00:25:54.861 "subsystem": "vmd", 00:25:54.861 "config": [] 00:25:54.861 }, 00:25:54.861 { 00:25:54.861 "subsystem": "accel", 00:25:54.861 "config": [ 00:25:54.861 { 00:25:54.861 "method": "accel_set_options", 00:25:54.861 "params": { 00:25:54.861 "small_cache_size": 128, 00:25:54.861 "large_cache_size": 16, 00:25:54.861 "task_count": 2048, 00:25:54.861 "sequence_count": 2048, 00:25:54.861 "buf_count": 2048 00:25:54.861 } 00:25:54.861 } 00:25:54.861 ] 00:25:54.861 }, 00:25:54.861 { 00:25:54.861 "subsystem": "bdev", 00:25:54.861 "config": [ 00:25:54.861 { 00:25:54.861 "method": "bdev_set_options", 00:25:54.861 "params": { 00:25:54.861 "bdev_io_pool_size": 65535, 00:25:54.861 "bdev_io_cache_size": 256, 00:25:54.861 "bdev_auto_examine": true, 00:25:54.861 "iobuf_small_cache_size": 128, 00:25:54.861 "iobuf_large_cache_size": 16 00:25:54.861 } 00:25:54.861 }, 00:25:54.861 { 00:25:54.861 "method": "bdev_raid_set_options", 00:25:54.861 "params": { 00:25:54.861 "process_window_size_kb": 1024 00:25:54.861 } 00:25:54.861 }, 00:25:54.861 { 00:25:54.861 "method": "bdev_iscsi_set_options", 00:25:54.861 "params": { 00:25:54.861 "timeout_sec": 30 00:25:54.861 } 00:25:54.861 }, 00:25:54.861 { 00:25:54.861 "method": "bdev_nvme_set_options", 00:25:54.861 "params": { 00:25:54.861 "action_on_timeout": "none", 00:25:54.861 "timeout_us": 0, 00:25:54.861 "timeout_admin_us": 0, 00:25:54.861 "keep_alive_timeout_ms": 10000, 00:25:54.861 "transport_retry_count": 4, 00:25:54.861 "arbitration_burst": 0, 00:25:54.861 "low_priority_weight": 0, 00:25:54.861 "medium_priority_weight": 0, 00:25:54.861 "high_priority_weight": 0, 00:25:54.861 "nvme_adminq_poll_period_us": 10000, 00:25:54.861 "nvme_ioq_poll_period_us": 0, 00:25:54.861 "io_queue_requests": 512, 00:25:54.861 "delay_cmd_submit": true, 00:25:54.861 "bdev_retry_count": 3, 00:25:54.861 "transport_ack_timeout": 0, 00:25:54.861 "ctrlr_loss_timeout_sec": 0, 00:25:54.861 "reconnect_delay_sec": 0, 00:25:54.861 "fast_io_fail_timeout_sec": 0, 00:25:54.861 "generate_uuids": false, 00:25:54.861 "transport_tos": 0, 00:25:54.861 "io_path_stat": false, 00:25:54.861 "allow_accel_sequence": false 00:25:54.861 } 00:25:54.861 }, 00:25:54.861 { 00:25:54.861 "method": "bdev_nvme_attach_controller", 00:25:54.861 "params": { 00:25:54.861 "name": "TLSTEST", 00:25:54.861 "trtype": "TCP", 00:25:54.861 "adrfam": "IPv4", 00:25:54.861 "traddr": "10.0.0.2", 00:25:54.861 "trsvcid": "4420", 00:25:54.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:54.861 "prchk_reftag": false, 00:25:54.861 "prchk_guard": false, 00:25:54.861 "ctrlr_loss_timeout_sec": 0, 00:25:54.861 "reconnect_delay_sec": 0, 00:25:54.861 "fast_io_fail_timeout_sec": 0, 00:25:54.861 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:25:54.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:54.861 "hdgst": false, 00:25:54.861 "ddgst": false 00:25:54.861 } 00:25:54.861 }, 00:25:54.861 { 00:25:54.861 "method": "bdev_nvme_set_hotplug", 00:25:54.861 "params": { 00:25:54.861 "period_us": 100000, 00:25:54.861 "enable": false 00:25:54.861 } 00:25:54.861 }, 00:25:54.861 { 00:25:54.861 "method": "bdev_wait_for_examine" 00:25:54.861 } 00:25:54.861 ] 00:25:54.861 }, 00:25:54.861 { 00:25:54.861 "subsystem": "nbd", 00:25:54.861 "config": [] 00:25:54.861 } 00:25:54.861 ] 00:25:54.861 }' 00:25:55.121 [2024-04-26 20:43:13.250318] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:55.121 [2024-04-26 20:43:13.250432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3636189 ] 00:25:55.121 EAL: No free 2048 kB hugepages reported on node 1 00:25:55.121 [2024-04-26 20:43:13.360694] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.121 [2024-04-26 20:43:13.454035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:55.379 [2024-04-26 20:43:13.648205] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:55.637 20:43:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:55.637 20:43:13 -- common/autotest_common.sh@852 -- # return 0 00:25:55.637 20:43:13 -- target/tls.sh@220 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:55.896 Running I/O for 10 seconds... 00:26:05.883 00:26:05.883 Latency(us) 00:26:05.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:05.883 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:05.883 Verification LBA range: start 0x0 length 0x2000 00:26:05.883 TLSTESTn1 : 10.01 4568.64 17.85 0.00 0.00 27991.04 4691.00 60431.09 00:26:05.883 =================================================================================================================== 00:26:05.883 Total : 4568.64 17.85 0.00 0.00 27991.04 4691.00 60431.09 00:26:05.883 0 00:26:05.883 20:43:24 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:05.883 20:43:24 -- target/tls.sh@223 -- # killprocess 3636189 00:26:05.883 20:43:24 -- common/autotest_common.sh@926 -- # '[' -z 3636189 ']' 00:26:05.883 20:43:24 -- common/autotest_common.sh@930 -- # kill -0 3636189 00:26:05.883 20:43:24 -- common/autotest_common.sh@931 -- # uname 00:26:05.883 20:43:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:05.883 20:43:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3636189 00:26:05.883 20:43:24 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:26:05.883 20:43:24 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:26:05.883 20:43:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3636189' 00:26:05.883 killing process with pid 3636189 00:26:05.883 20:43:24 -- common/autotest_common.sh@945 -- # kill 3636189 00:26:05.883 Received shutdown signal, test time was about 10.000000 seconds 00:26:05.883 00:26:05.883 Latency(us) 00:26:05.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:05.883 =================================================================================================================== 00:26:05.883 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:05.883 20:43:24 -- common/autotest_common.sh@950 -- # wait 3636189 00:26:06.144 20:43:24 -- target/tls.sh@224 -- # killprocess 3635873 00:26:06.144 20:43:24 -- common/autotest_common.sh@926 -- # '[' -z 3635873 ']' 00:26:06.144 20:43:24 -- common/autotest_common.sh@930 -- # kill -0 3635873 00:26:06.144 20:43:24 -- common/autotest_common.sh@931 -- # uname 00:26:06.144 20:43:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:06.144 20:43:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3635873 00:26:06.405 20:43:24 -- common/autotest_common.sh@932 -- # 
process_name=reactor_1 00:26:06.405 20:43:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:06.405 20:43:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3635873' 00:26:06.405 killing process with pid 3635873 00:26:06.405 20:43:24 -- common/autotest_common.sh@945 -- # kill 3635873 00:26:06.405 20:43:24 -- common/autotest_common.sh@950 -- # wait 3635873 00:26:06.666 20:43:24 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:26:06.666 20:43:25 -- target/tls.sh@227 -- # cleanup 00:26:06.666 20:43:25 -- target/tls.sh@15 -- # process_shm --id 0 00:26:06.666 20:43:25 -- common/autotest_common.sh@796 -- # type=--id 00:26:06.666 20:43:25 -- common/autotest_common.sh@797 -- # id=0 00:26:06.666 20:43:25 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:26:06.666 20:43:25 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:26:06.925 20:43:25 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:26:06.925 20:43:25 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:26:06.925 20:43:25 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:26:06.925 20:43:25 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:26:06.925 nvmf_trace.0 00:26:06.925 20:43:25 -- common/autotest_common.sh@811 -- # return 0 00:26:06.925 20:43:25 -- target/tls.sh@16 -- # killprocess 3636189 00:26:06.925 20:43:25 -- common/autotest_common.sh@926 -- # '[' -z 3636189 ']' 00:26:06.925 20:43:25 -- common/autotest_common.sh@930 -- # kill -0 3636189 00:26:06.925 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3636189) - No such process 00:26:06.925 20:43:25 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3636189 is not found' 00:26:06.925 Process with pid 3636189 is not found 00:26:06.925 20:43:25 -- target/tls.sh@17 -- # nvmftestfini 00:26:06.925 20:43:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:06.925 20:43:25 -- nvmf/common.sh@116 -- # sync 00:26:06.925 20:43:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:06.925 20:43:25 -- nvmf/common.sh@119 -- # set +e 00:26:06.925 20:43:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:06.925 20:43:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:06.926 rmmod nvme_tcp 00:26:06.926 rmmod nvme_fabrics 00:26:06.926 rmmod nvme_keyring 00:26:06.926 20:43:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:06.926 20:43:25 -- nvmf/common.sh@123 -- # set -e 00:26:06.926 20:43:25 -- nvmf/common.sh@124 -- # return 0 00:26:06.926 20:43:25 -- nvmf/common.sh@477 -- # '[' -n 3635873 ']' 00:26:06.926 20:43:25 -- nvmf/common.sh@478 -- # killprocess 3635873 00:26:06.926 20:43:25 -- common/autotest_common.sh@926 -- # '[' -z 3635873 ']' 00:26:06.926 20:43:25 -- common/autotest_common.sh@930 -- # kill -0 3635873 00:26:06.926 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3635873) - No such process 00:26:06.926 20:43:25 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3635873 is not found' 00:26:06.926 Process with pid 3635873 is not found 00:26:06.926 20:43:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:06.926 20:43:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:06.926 20:43:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:06.926 20:43:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:06.926 
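The process_shm step traced above preserves the target's shared-memory trace before the nvme modules are unloaded: every /dev/shm file matching the app's shm id (here nvmf_trace.0, id 0) is tarred into the job's output directory. Reduced to its core, with $output standing in for that directory:

  id=0                                            # $NVMF_APP_SHM_ID of the target
  shm_files=$(find /dev/shm -name "*.${id}" -printf '%f\n')
  for n in $shm_files; do                         # here: nvmf_trace.0
      tar -C /dev/shm/ -cvzf "$output/${n}_shm.tar.gz" "$n"
  done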
20:43:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:06.926 20:43:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:06.926 20:43:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:06.926 20:43:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.463 20:43:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:09.463 20:43:27 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:26:09.463 00:26:09.463 real 1m13.420s 00:26:09.463 user 1m53.637s 00:26:09.463 sys 0m20.114s 00:26:09.463 20:43:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:09.463 20:43:27 -- common/autotest_common.sh@10 -- # set +x 00:26:09.463 ************************************ 00:26:09.463 END TEST nvmf_tls 00:26:09.463 ************************************ 00:26:09.463 20:43:27 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:26:09.463 20:43:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:09.463 20:43:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:09.463 20:43:27 -- common/autotest_common.sh@10 -- # set +x 00:26:09.463 ************************************ 00:26:09.463 START TEST nvmf_fips 00:26:09.463 ************************************ 00:26:09.463 20:43:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:26:09.463 * Looking for test storage... 00:26:09.463 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips 00:26:09.463 20:43:27 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:26:09.463 20:43:27 -- nvmf/common.sh@7 -- # uname -s 00:26:09.463 20:43:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:09.463 20:43:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:09.463 20:43:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:09.463 20:43:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:09.463 20:43:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:09.463 20:43:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:09.463 20:43:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:09.463 20:43:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:09.463 20:43:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:09.463 20:43:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:09.463 20:43:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:26:09.463 20:43:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:26:09.463 20:43:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:09.463 20:43:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:09.463 20:43:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:26:09.463 20:43:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:26:09.463 20:43:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:09.463 20:43:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:09.463 20:43:27 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:26:09.464 20:43:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.464 20:43:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.464 20:43:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.464 20:43:27 -- paths/export.sh@5 -- # export PATH 00:26:09.464 20:43:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.464 20:43:27 -- nvmf/common.sh@46 -- # : 0 00:26:09.464 20:43:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:09.464 20:43:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:09.464 20:43:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:09.464 20:43:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:09.464 20:43:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:09.464 20:43:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:09.464 20:43:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:09.464 20:43:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:09.464 20:43:27 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:26:09.464 20:43:27 -- fips/fips.sh@89 -- # check_openssl_version 00:26:09.464 20:43:27 -- fips/fips.sh@83 -- # local target=3.0.0 00:26:09.464 20:43:27 -- fips/fips.sh@85 -- # openssl version 00:26:09.464 20:43:27 -- fips/fips.sh@85 -- # awk '{print $2}' 00:26:09.464 20:43:27 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:26:09.464 20:43:27 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:26:09.464 20:43:27 -- 
scripts/common.sh@332 -- # local ver1 ver1_l 00:26:09.464 20:43:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:09.464 20:43:27 -- scripts/common.sh@335 -- # IFS=.-: 00:26:09.464 20:43:27 -- scripts/common.sh@335 -- # read -ra ver1 00:26:09.464 20:43:27 -- scripts/common.sh@336 -- # IFS=.-: 00:26:09.464 20:43:27 -- scripts/common.sh@336 -- # read -ra ver2 00:26:09.464 20:43:27 -- scripts/common.sh@337 -- # local 'op=>=' 00:26:09.464 20:43:27 -- scripts/common.sh@339 -- # ver1_l=3 00:26:09.464 20:43:27 -- scripts/common.sh@340 -- # ver2_l=3 00:26:09.464 20:43:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:09.464 20:43:27 -- scripts/common.sh@343 -- # case "$op" in 00:26:09.464 20:43:27 -- scripts/common.sh@347 -- # : 1 00:26:09.464 20:43:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:09.464 20:43:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:09.464 20:43:27 -- scripts/common.sh@364 -- # decimal 3 00:26:09.464 20:43:27 -- scripts/common.sh@352 -- # local d=3 00:26:09.464 20:43:27 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:26:09.464 20:43:27 -- scripts/common.sh@354 -- # echo 3 00:26:09.464 20:43:27 -- scripts/common.sh@364 -- # ver1[v]=3 00:26:09.464 20:43:27 -- scripts/common.sh@365 -- # decimal 3 00:26:09.464 20:43:27 -- scripts/common.sh@352 -- # local d=3 00:26:09.464 20:43:27 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:26:09.464 20:43:27 -- scripts/common.sh@354 -- # echo 3 00:26:09.464 20:43:27 -- scripts/common.sh@365 -- # ver2[v]=3 00:26:09.464 20:43:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:09.464 20:43:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:09.464 20:43:27 -- scripts/common.sh@363 -- # (( v++ )) 00:26:09.464 20:43:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:09.464 20:43:27 -- scripts/common.sh@364 -- # decimal 0 00:26:09.464 20:43:27 -- scripts/common.sh@352 -- # local d=0 00:26:09.464 20:43:27 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:26:09.464 20:43:27 -- scripts/common.sh@354 -- # echo 0 00:26:09.464 20:43:27 -- scripts/common.sh@364 -- # ver1[v]=0 00:26:09.464 20:43:27 -- scripts/common.sh@365 -- # decimal 0 00:26:09.464 20:43:27 -- scripts/common.sh@352 -- # local d=0 00:26:09.464 20:43:27 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:26:09.464 20:43:27 -- scripts/common.sh@354 -- # echo 0 00:26:09.464 20:43:27 -- scripts/common.sh@365 -- # ver2[v]=0 00:26:09.464 20:43:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:09.464 20:43:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:09.464 20:43:27 -- scripts/common.sh@363 -- # (( v++ )) 00:26:09.464 20:43:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:09.464 20:43:27 -- scripts/common.sh@364 -- # decimal 9 00:26:09.464 20:43:27 -- scripts/common.sh@352 -- # local d=9 00:26:09.464 20:43:27 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:26:09.464 20:43:27 -- scripts/common.sh@354 -- # echo 9 00:26:09.464 20:43:27 -- scripts/common.sh@364 -- # ver1[v]=9 00:26:09.464 20:43:27 -- scripts/common.sh@365 -- # decimal 0 00:26:09.464 20:43:27 -- scripts/common.sh@352 -- # local d=0 00:26:09.464 20:43:27 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:26:09.464 20:43:27 -- scripts/common.sh@354 -- # echo 0 00:26:09.464 20:43:27 -- scripts/common.sh@365 -- # ver2[v]=0 00:26:09.464 20:43:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:09.464 20:43:27 -- scripts/common.sh@366 -- # return 0 00:26:09.464 20:43:27 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:26:09.464 20:43:27 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:26:09.464 20:43:27 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:26:09.464 20:43:27 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:26:09.464 20:43:27 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:26:09.464 20:43:27 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:26:09.464 20:43:27 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:26:09.464 20:43:27 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:26:09.464 20:43:27 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:26:09.464 20:43:27 -- fips/fips.sh@114 -- # build_openssl_config 00:26:09.464 20:43:27 -- fips/fips.sh@37 -- # cat 00:26:09.464 20:43:27 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:26:09.464 20:43:27 -- fips/fips.sh@58 -- # cat - 00:26:09.464 20:43:27 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:26:09.464 20:43:27 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:26:09.464 20:43:27 -- fips/fips.sh@117 -- # mapfile -t providers 00:26:09.464 20:43:27 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:26:09.464 20:43:27 -- fips/fips.sh@117 -- # openssl list -providers 00:26:09.464 20:43:27 -- fips/fips.sh@117 -- # grep name 00:26:09.464 20:43:27 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:26:09.464 20:43:27 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:26:09.464 20:43:27 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:26:09.464 20:43:27 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:26:09.464 20:43:27 -- common/autotest_common.sh@640 -- # local es=0 00:26:09.464 20:43:27 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:26:09.464 20:43:27 -- common/autotest_common.sh@628 -- # local arg=openssl 00:26:09.464 20:43:27 -- fips/fips.sh@128 -- # : 00:26:09.464 20:43:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:09.464 20:43:27 -- common/autotest_common.sh@632 -- # type -t openssl 00:26:09.464 20:43:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:09.464 20:43:27 -- common/autotest_common.sh@634 -- # type -P openssl 00:26:09.464 20:43:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:09.464 20:43:27 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:26:09.464 20:43:27 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:26:09.464 20:43:27 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:26:09.464 Error setting digest 00:26:09.464 00D2B1177B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:26:09.464 00D2B1177B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:26:09.464 20:43:27 -- common/autotest_common.sh@643 -- # es=1 00:26:09.464 20:43:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:09.464 20:43:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:09.464 20:43:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:09.464 20:43:27 -- fips/fips.sh@131 -- # nvmftestinit 00:26:09.464 20:43:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:09.464 20:43:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:09.464 20:43:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:09.464 20:43:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:09.464 20:43:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:09.464 20:43:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.464 20:43:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:09.464 20:43:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.464 20:43:27 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:26:09.464 20:43:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:09.464 20:43:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:09.464 20:43:27 -- common/autotest_common.sh@10 -- # set +x 00:26:14.739 20:43:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:14.739 20:43:32 -- nvmf/common.sh@290 -- # 
pci_devs=() 00:26:14.739 20:43:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:14.739 20:43:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:14.739 20:43:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:14.739 20:43:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:14.739 20:43:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:14.739 20:43:32 -- nvmf/common.sh@294 -- # net_devs=() 00:26:14.739 20:43:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:14.739 20:43:32 -- nvmf/common.sh@295 -- # e810=() 00:26:14.739 20:43:32 -- nvmf/common.sh@295 -- # local -ga e810 00:26:14.739 20:43:32 -- nvmf/common.sh@296 -- # x722=() 00:26:14.739 20:43:32 -- nvmf/common.sh@296 -- # local -ga x722 00:26:14.739 20:43:32 -- nvmf/common.sh@297 -- # mlx=() 00:26:14.739 20:43:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:14.739 20:43:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:14.739 20:43:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:14.739 20:43:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:14.739 20:43:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:14.739 20:43:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:14.739 20:43:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:14.739 20:43:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:14.739 20:43:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:14.739 20:43:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:14.739 20:43:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:14.739 20:43:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:14.739 20:43:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:14.739 20:43:32 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:14.739 20:43:32 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:26:14.739 20:43:32 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:26:14.739 20:43:32 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:26:14.740 20:43:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:14.740 20:43:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:14.740 20:43:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:26:14.740 Found 0000:27:00.0 (0x8086 - 0x159b) 00:26:14.740 20:43:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:14.740 20:43:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:14.740 20:43:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:14.740 20:43:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:14.740 20:43:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:14.740 20:43:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:14.740 20:43:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:26:14.740 Found 0000:27:00.1 (0x8086 - 0x159b) 00:26:14.740 20:43:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:14.740 20:43:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:14.740 20:43:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:14.740 20:43:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:14.740 20:43:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:14.740 20:43:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:14.740 20:43:32 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 
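The ge 3.0.9 3.0.0 walk a little way above is scripts/common.sh checking that the system OpenSSL is new enough for the FIPS test: both version strings are split on '.', '-' or ':' and compared component by component, the first numeric difference deciding. A condensed sketch of that logic (the real helper factors it through cmp_versions and a decimal sanitizer):

  ge() {
      local IFS=.-: v
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 0   # strictly newer
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 1   # strictly older
      done
      return 0                                              # equal still satisfies >=
  }
  ge "$(openssl version | awk '{print $2}')" 3.0.0          # 3.0.9 >= 3.0.0, so the test proceeds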
00:26:14.740 20:43:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:14.740 20:43:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:14.740 20:43:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:14.740 20:43:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:14.740 20:43:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:26:14.740 Found net devices under 0000:27:00.0: cvl_0_0 00:26:14.740 20:43:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:14.740 20:43:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:14.740 20:43:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:14.740 20:43:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:14.740 20:43:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:14.740 20:43:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:26:14.740 Found net devices under 0000:27:00.1: cvl_0_1 00:26:14.740 20:43:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:14.740 20:43:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:14.740 20:43:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:14.740 20:43:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:14.740 20:43:32 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:14.740 20:43:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:14.740 20:43:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:14.740 20:43:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:14.740 20:43:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:14.740 20:43:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:14.740 20:43:32 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:14.740 20:43:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:14.740 20:43:32 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:14.740 20:43:32 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:14.740 20:43:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:14.740 20:43:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:14.740 20:43:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:14.740 20:43:32 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:14.740 20:43:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:14.740 20:43:32 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:14.740 20:43:32 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:14.740 20:43:32 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:14.740 20:43:32 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:14.740 20:43:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:14.740 20:43:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:14.740 20:43:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:14.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:14.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:26:14.740 00:26:14.740 --- 10.0.0.2 ping statistics --- 00:26:14.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.740 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:26:14.740 20:43:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:14.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:14.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.465 ms 00:26:14.740 00:26:14.740 --- 10.0.0.1 ping statistics --- 00:26:14.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.740 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:26:14.740 20:43:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:14.740 20:43:32 -- nvmf/common.sh@410 -- # return 0 00:26:14.740 20:43:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:14.740 20:43:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:14.740 20:43:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:14.740 20:43:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:14.740 20:43:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:14.740 20:43:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:14.740 20:43:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:14.740 20:43:32 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:26:14.740 20:43:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:14.740 20:43:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:14.740 20:43:32 -- common/autotest_common.sh@10 -- # set +x 00:26:14.740 20:43:32 -- nvmf/common.sh@469 -- # nvmfpid=3642262 00:26:14.740 20:43:32 -- nvmf/common.sh@470 -- # waitforlisten 3642262 00:26:14.740 20:43:32 -- common/autotest_common.sh@819 -- # '[' -z 3642262 ']' 00:26:14.740 20:43:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:14.740 20:43:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:14.740 20:43:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:14.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:14.740 20:43:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:14.740 20:43:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:14.740 20:43:32 -- common/autotest_common.sh@10 -- # set +x 00:26:14.740 [2024-04-26 20:43:32.940285] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:14.740 [2024-04-26 20:43:32.940428] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:14.740 EAL: No free 2048 kB hugepages reported on node 1 00:26:14.998 [2024-04-26 20:43:33.084688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.998 [2024-04-26 20:43:33.183079] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:14.998 [2024-04-26 20:43:33.183285] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:14.998 [2024-04-26 20:43:33.183301] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
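The nvmf_tcp_init sequence traced above wires the NIC's two ports back-to-back: the target-side port is moved into its own network namespace so that traffic between initiator (cvl_0_1, 10.0.0.1) and target (cvl_0_0, 10.0.0.2) genuinely crosses the wire, port 4420 is opened in the firewall, and both directions are ping-verified before the test starts. The same steps, condensed (address flushes omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator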
00:26:14.998 [2024-04-26 20:43:33.183312] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:14.998 [2024-04-26 20:43:33.183349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:15.562 20:43:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:15.562 20:43:33 -- common/autotest_common.sh@852 -- # return 0 00:26:15.562 20:43:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:15.562 20:43:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:15.562 20:43:33 -- common/autotest_common.sh@10 -- # set +x 00:26:15.562 20:43:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:15.562 20:43:33 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:26:15.562 20:43:33 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:26:15.562 20:43:33 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:15.562 20:43:33 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:26:15.562 20:43:33 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:15.562 20:43:33 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:15.562 20:43:33 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:15.562 20:43:33 -- fips/fips.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:26:15.562 [2024-04-26 20:43:33.765471] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:15.562 [2024-04-26 20:43:33.781430] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:15.562 [2024-04-26 20:43:33.781635] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:15.562 malloc0 00:26:15.562 20:43:33 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:15.562 20:43:33 -- fips/fips.sh@148 -- # bdevperf_pid=3642556 00:26:15.562 20:43:33 -- fips/fips.sh@149 -- # waitforlisten 3642556 /var/tmp/bdevperf.sock 00:26:15.562 20:43:33 -- common/autotest_common.sh@819 -- # '[' -z 3642556 ']' 00:26:15.562 20:43:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:15.562 20:43:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:15.562 20:43:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:15.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:15.562 20:43:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:15.562 20:43:33 -- common/autotest_common.sh@10 -- # set +x 00:26:15.562 20:43:33 -- fips/fips.sh@146 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:15.822 [2024-04-26 20:43:33.946377] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
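fips.sh@137-140 above stage the TLS pre-shared key: the NVMeTLSkey-1 interchange string is written byte-exact into key.txt, the file is kept private with chmod 0600, and the target is configured to listen on 10.0.0.2:4420 with TLS. The initiator attach that follows (fips.sh@151 below) then points --psk at the same file. In short, with the workspace prefix shortened:

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  echo -n "$key" > test/nvmf/fips/key.txt             # byte-exact: no trailing newline
  chmod 0600 test/nvmf/fips/key.txt                   # keep the PSK private
  # target side: setup_nvmf_tgt_conf adds a TLS listener on 10.0.0.2:4420 (traced above)
  # initiator side: bdev_nvme_attach_controller ... --psk test/nvmf/fips/key.txt (traced below)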
00:26:15.822 [2024-04-26 20:43:33.946504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3642556 ] 00:26:15.822 EAL: No free 2048 kB hugepages reported on node 1 00:26:15.822 [2024-04-26 20:43:34.058727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.822 [2024-04-26 20:43:34.152891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:16.389 20:43:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:16.389 20:43:34 -- common/autotest_common.sh@852 -- # return 0 00:26:16.389 20:43:34 -- fips/fips.sh@151 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:16.647 [2024-04-26 20:43:34.756334] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:16.647 TLSTESTn1 00:26:16.647 20:43:34 -- fips/fips.sh@155 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:16.647 Running I/O for 10 seconds... 00:26:26.630 00:26:26.630 Latency(us) 00:26:26.630 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:26.630 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:26.630 Verification LBA range: start 0x0 length 0x2000 00:26:26.631 TLSTESTn1 : 10.01 4573.69 17.87 0.00 0.00 27960.23 4484.04 50221.27 00:26:26.631 =================================================================================================================== 00:26:26.631 Total : 4573.69 17.87 0.00 0.00 27960.23 4484.04 50221.27 00:26:26.631 0 00:26:26.631 20:43:44 -- fips/fips.sh@1 -- # cleanup 00:26:26.631 20:43:44 -- fips/fips.sh@15 -- # process_shm --id 0 00:26:26.631 20:43:44 -- common/autotest_common.sh@796 -- # type=--id 00:26:26.631 20:43:44 -- common/autotest_common.sh@797 -- # id=0 00:26:26.631 20:43:44 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:26:26.631 20:43:44 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:26:26.631 20:43:44 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:26:26.631 20:43:44 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:26:26.631 20:43:44 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:26:26.631 20:43:44 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:26:26.631 nvmf_trace.0 00:26:26.887 20:43:45 -- common/autotest_common.sh@811 -- # return 0 00:26:26.887 20:43:45 -- fips/fips.sh@16 -- # killprocess 3642556 00:26:26.887 20:43:45 -- common/autotest_common.sh@926 -- # '[' -z 3642556 ']' 00:26:26.887 20:43:45 -- common/autotest_common.sh@930 -- # kill -0 3642556 00:26:26.887 20:43:45 -- common/autotest_common.sh@931 -- # uname 00:26:26.887 20:43:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:26.887 20:43:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3642556 00:26:26.887 20:43:45 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:26:26.887 20:43:45 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo 
']' 00:26:26.887 20:43:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3642556' 00:26:26.887 killing process with pid 3642556 00:26:26.887 20:43:45 -- common/autotest_common.sh@945 -- # kill 3642556 00:26:26.887 Received shutdown signal, test time was about 10.000000 seconds 00:26:26.887 00:26:26.887 Latency(us) 00:26:26.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:26.887 =================================================================================================================== 00:26:26.887 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:26.887 20:43:45 -- common/autotest_common.sh@950 -- # wait 3642556 00:26:27.145 20:43:45 -- fips/fips.sh@17 -- # nvmftestfini 00:26:27.145 20:43:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:27.145 20:43:45 -- nvmf/common.sh@116 -- # sync 00:26:27.145 20:43:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:27.145 20:43:45 -- nvmf/common.sh@119 -- # set +e 00:26:27.145 20:43:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:27.145 20:43:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:27.145 rmmod nvme_tcp 00:26:27.145 rmmod nvme_fabrics 00:26:27.145 rmmod nvme_keyring 00:26:27.145 20:43:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:27.403 20:43:45 -- nvmf/common.sh@123 -- # set -e 00:26:27.403 20:43:45 -- nvmf/common.sh@124 -- # return 0 00:26:27.403 20:43:45 -- nvmf/common.sh@477 -- # '[' -n 3642262 ']' 00:26:27.403 20:43:45 -- nvmf/common.sh@478 -- # killprocess 3642262 00:26:27.403 20:43:45 -- common/autotest_common.sh@926 -- # '[' -z 3642262 ']' 00:26:27.403 20:43:45 -- common/autotest_common.sh@930 -- # kill -0 3642262 00:26:27.403 20:43:45 -- common/autotest_common.sh@931 -- # uname 00:26:27.403 20:43:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:27.403 20:43:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3642262 00:26:27.403 20:43:45 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:27.403 20:43:45 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:27.403 20:43:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3642262' 00:26:27.403 killing process with pid 3642262 00:26:27.403 20:43:45 -- common/autotest_common.sh@945 -- # kill 3642262 00:26:27.403 20:43:45 -- common/autotest_common.sh@950 -- # wait 3642262 00:26:27.972 20:43:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:27.972 20:43:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:27.972 20:43:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:27.972 20:43:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:27.972 20:43:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:27.972 20:43:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.972 20:43:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:27.972 20:43:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.881 20:43:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:29.881 20:43:48 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:29.881 00:26:29.881 real 0m20.880s 00:26:29.881 user 0m24.533s 00:26:29.881 sys 0m6.919s 00:26:29.881 20:43:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:29.881 20:43:48 -- common/autotest_common.sh@10 -- # set +x 00:26:29.881 ************************************ 00:26:29.881 END TEST nvmf_fips 00:26:29.881 
************************************ 00:26:29.881 20:43:48 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:26:29.881 20:43:48 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:26:29.881 20:43:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:29.881 20:43:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:29.881 20:43:48 -- common/autotest_common.sh@10 -- # set +x 00:26:29.881 ************************************ 00:26:29.881 START TEST nvmf_fuzz 00:26:29.881 ************************************ 00:26:29.881 20:43:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:26:29.881 * Looking for test storage... 00:26:29.881 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:26:29.881 20:43:48 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:26:29.881 20:43:48 -- nvmf/common.sh@7 -- # uname -s 00:26:29.881 20:43:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:29.881 20:43:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:29.881 20:43:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:30.142 20:43:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:30.142 20:43:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:30.142 20:43:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:30.142 20:43:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:30.142 20:43:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:30.142 20:43:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:30.142 20:43:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:30.142 20:43:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:26:30.142 20:43:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:26:30.142 20:43:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:30.142 20:43:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:30.142 20:43:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:26:30.142 20:43:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:26:30.142 20:43:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:30.142 20:43:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:30.142 20:43:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:30.142 20:43:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.142 20:43:48 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.142 20:43:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.142 20:43:48 -- paths/export.sh@5 -- # export PATH 00:26:30.142 20:43:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.142 20:43:48 -- nvmf/common.sh@46 -- # : 0 00:26:30.142 20:43:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:30.142 20:43:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:30.142 20:43:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:30.142 20:43:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:30.142 20:43:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:30.142 20:43:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:30.142 20:43:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:30.142 20:43:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:30.142 20:43:48 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:26:30.142 20:43:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:30.142 20:43:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:30.142 20:43:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:30.142 20:43:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:30.142 20:43:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:30.142 20:43:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.142 20:43:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:30.142 20:43:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:30.142 20:43:48 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:26:30.142 20:43:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:30.142 20:43:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:30.142 20:43:48 -- common/autotest_common.sh@10 -- # set +x 00:26:35.416 20:43:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:35.416 20:43:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:35.416 20:43:53 -- nvmf/common.sh@290 -- # local -a pci_devs 
00:26:35.417 20:43:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:35.417 20:43:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:35.417 20:43:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:35.417 20:43:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:35.417 20:43:53 -- nvmf/common.sh@294 -- # net_devs=() 00:26:35.417 20:43:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:35.417 20:43:53 -- nvmf/common.sh@295 -- # e810=() 00:26:35.417 20:43:53 -- nvmf/common.sh@295 -- # local -ga e810 00:26:35.417 20:43:53 -- nvmf/common.sh@296 -- # x722=() 00:26:35.417 20:43:53 -- nvmf/common.sh@296 -- # local -ga x722 00:26:35.417 20:43:53 -- nvmf/common.sh@297 -- # mlx=() 00:26:35.417 20:43:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:35.417 20:43:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:35.417 20:43:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:35.417 20:43:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:35.417 20:43:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:35.417 20:43:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:35.417 20:43:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:35.417 20:43:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:35.417 20:43:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:35.417 20:43:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:35.417 20:43:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:35.417 20:43:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:35.417 20:43:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:35.417 20:43:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:35.417 20:43:53 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:26:35.417 20:43:53 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:26:35.417 20:43:53 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:26:35.417 20:43:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:35.417 20:43:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:35.417 20:43:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:26:35.417 Found 0000:27:00.0 (0x8086 - 0x159b) 00:26:35.417 20:43:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:35.417 20:43:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:35.417 20:43:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:35.417 20:43:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:35.417 20:43:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:35.417 20:43:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:35.417 20:43:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:26:35.417 Found 0000:27:00.1 (0x8086 - 0x159b) 00:26:35.417 20:43:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:35.417 20:43:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:35.417 20:43:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:35.417 20:43:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:35.417 20:43:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:35.417 20:43:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:35.417 20:43:53 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:26:35.417 20:43:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:26:35.417 20:43:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:35.417 20:43:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:35.417 20:43:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:35.417 20:43:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:26:35.417 Found net devices under 0000:27:00.0: cvl_0_0 00:26:35.417 20:43:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:35.417 20:43:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:35.417 20:43:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:35.417 20:43:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:35.417 20:43:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:35.417 20:43:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:26:35.417 Found net devices under 0000:27:00.1: cvl_0_1 00:26:35.417 20:43:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:35.417 20:43:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:35.417 20:43:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:35.417 20:43:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:35.417 20:43:53 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:35.417 20:43:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:35.417 20:43:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:35.417 20:43:53 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:35.417 20:43:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:35.417 20:43:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:35.417 20:43:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:35.417 20:43:53 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:35.417 20:43:53 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:35.417 20:43:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:35.417 20:43:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:35.417 20:43:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:35.417 20:43:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:35.417 20:43:53 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:35.417 20:43:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:35.417 20:43:53 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:35.417 20:43:53 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:35.417 20:43:53 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:35.417 20:43:53 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:35.417 20:43:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:35.417 20:43:53 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:35.417 20:43:53 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:35.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:35.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:26:35.417 00:26:35.417 --- 10.0.0.2 ping statistics --- 00:26:35.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.417 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:26:35.417 20:43:53 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:35.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:35.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:26:35.417 00:26:35.417 --- 10.0.0.1 ping statistics --- 00:26:35.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.417 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:26:35.417 20:43:53 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:35.417 20:43:53 -- nvmf/common.sh@410 -- # return 0 00:26:35.417 20:43:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:35.417 20:43:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:35.417 20:43:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:35.418 20:43:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:35.418 20:43:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:35.418 20:43:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:35.418 20:43:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:35.418 20:43:53 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3648864 00:26:35.418 20:43:53 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:35.418 20:43:53 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3648864 00:26:35.418 20:43:53 -- common/autotest_common.sh@819 -- # '[' -z 3648864 ']' 00:26:35.418 20:43:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:35.418 20:43:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:35.418 20:43:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:35.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:35.418 20:43:53 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:35.418 20:43:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:35.418 20:43:53 -- common/autotest_common.sh@10 -- # set +x 00:26:36.356 20:43:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:36.356 20:43:54 -- common/autotest_common.sh@852 -- # return 0 00:26:36.356 20:43:54 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:36.356 20:43:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:36.356 20:43:54 -- common/autotest_common.sh@10 -- # set +x 00:26:36.356 20:43:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:36.356 20:43:54 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:26:36.356 20:43:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:36.356 20:43:54 -- common/autotest_common.sh@10 -- # set +x 00:26:36.356 Malloc0 00:26:36.356 20:43:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:36.356 20:43:54 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:36.356 20:43:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:36.356 20:43:54 -- common/autotest_common.sh@10 -- # set +x 00:26:36.356 20:43:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:36.356 20:43:54 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:36.356 20:43:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:36.356 20:43:54 -- common/autotest_common.sh@10 -- # set +x 00:26:36.356 20:43:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:36.356 20:43:54 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:36.356 20:43:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:36.356 20:43:54 -- common/autotest_common.sh@10 -- # set +x 00:26:36.356 20:43:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:36.356 20:43:54 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:26:36.356 20:43:54 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:27:08.521 Fuzzing completed. Shutting down the fuzz application 00:27:08.521 00:27:08.521 Dumping successful admin opcodes: 00:27:08.521 8, 9, 10, 24, 00:27:08.521 Dumping successful io opcodes: 00:27:08.521 0, 9, 00:27:08.521 NS: 0x200003aefec0 I/O qp, Total commands completed: 838586, total successful commands: 4872, random_seed: 4159630528 00:27:08.521 NS: 0x200003aefec0 admin qp, Total commands completed: 77056, total successful commands: 596, random_seed: 1058757696 00:27:08.521 20:44:25 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:27:08.521 Fuzzing completed. 
Shutting down the fuzz application 00:27:08.521 00:27:08.521 Dumping successful admin opcodes: 00:27:08.521 24, 00:27:08.521 Dumping successful io opcodes: 00:27:08.521 00:27:08.521 NS: 0x200003aefec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1716913291 00:27:08.521 NS: 0x200003aefec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1717004611 00:27:08.521 20:44:26 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:08.521 20:44:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:08.521 20:44:26 -- common/autotest_common.sh@10 -- # set +x 00:27:08.521 20:44:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:08.521 20:44:26 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:08.521 20:44:26 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:27:08.521 20:44:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:08.521 20:44:26 -- nvmf/common.sh@116 -- # sync 00:27:08.521 20:44:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:08.521 20:44:26 -- nvmf/common.sh@119 -- # set +e 00:27:08.521 20:44:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:08.521 20:44:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:08.521 rmmod nvme_tcp 00:27:08.521 rmmod nvme_fabrics 00:27:08.521 rmmod nvme_keyring 00:27:08.521 20:44:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:08.521 20:44:26 -- nvmf/common.sh@123 -- # set -e 00:27:08.521 20:44:26 -- nvmf/common.sh@124 -- # return 0 00:27:08.521 20:44:26 -- nvmf/common.sh@477 -- # '[' -n 3648864 ']' 00:27:08.521 20:44:26 -- nvmf/common.sh@478 -- # killprocess 3648864 00:27:08.521 20:44:26 -- common/autotest_common.sh@926 -- # '[' -z 3648864 ']' 00:27:08.521 20:44:26 -- common/autotest_common.sh@930 -- # kill -0 3648864 00:27:08.521 20:44:26 -- common/autotest_common.sh@931 -- # uname 00:27:08.521 20:44:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:08.521 20:44:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3648864 00:27:08.521 20:44:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:08.521 20:44:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:08.521 20:44:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3648864' 00:27:08.521 killing process with pid 3648864 00:27:08.521 20:44:26 -- common/autotest_common.sh@945 -- # kill 3648864 00:27:08.521 20:44:26 -- common/autotest_common.sh@950 -- # wait 3648864 00:27:09.174 20:44:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:09.174 20:44:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:09.174 20:44:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:09.174 20:44:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:09.174 20:44:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:09.174 20:44:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.174 20:44:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:09.174 20:44:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.081 20:44:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:11.081 20:44:29 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:27:11.081 00:27:11.081 real 0m41.241s 00:27:11.081 user 0m58.720s 00:27:11.081 sys 0m11.843s 
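That completes fabrics_fuzz.sh: provision one malloc-backed subsystem over RPC, run two passes of nvme_fuzz against it (a seeded 30-second random pass, then a replay of the curated requests in example.json), and tear everything down. The pass criterion is simply that the target survives; the "successful commands" counters above show how often the random pass happened to generate valid commands. A condensed sketch of the same sequence, with the standalone scripts/rpc.py standing in for the harness's rpc_cmd wrapper (an assumption; the trace goes through rpc_cmd):

  # provision the target that nvmf_tgt is serving inside the namespace
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
  # pass 1: 30 s of random commands, seeded so a crash is reproducible
  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$trid" -N -a
  # pass 2: replay the handcrafted requests shipped in example.json
  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F "$trid" \
      -j test/app/fuzz/nvme_fuzz/example.json -a

All flags are taken verbatim from the invocations above.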
00:27:11.081 20:44:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:11.081 20:44:29 -- common/autotest_common.sh@10 -- # set +x 00:27:11.081 ************************************ 00:27:11.081 END TEST nvmf_fuzz 00:27:11.081 ************************************ 00:27:11.342 20:44:29 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:27:11.342 20:44:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:11.342 20:44:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:11.342 20:44:29 -- common/autotest_common.sh@10 -- # set +x 00:27:11.342 ************************************ 00:27:11.342 START TEST nvmf_multiconnection 00:27:11.342 ************************************ 00:27:11.342 20:44:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:27:11.342 * Looking for test storage... 00:27:11.342 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target
00:27:11.342 20:44:29 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:27:11.342 20:44:29 -- nvmf/common.sh@7 -- # uname -s 00:27:11.342 20:44:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:11.342 20:44:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:11.342 20:44:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:11.342 20:44:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:11.342 20:44:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:11.342 20:44:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:11.342 20:44:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:11.342 20:44:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:11.342 20:44:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:11.342 20:44:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:11.342 20:44:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:27:11.342 20:44:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:27:11.342 20:44:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:11.342 20:44:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:11.342 20:44:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:27:11.342 20:44:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:27:11.342 20:44:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:11.342 20:44:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:11.342 20:44:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:11.342 20:44:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.342 20:44:29 -- paths/export.sh@5 -- # export PATH
00:27:11.342 20:44:29 -- nvmf/common.sh@46 -- # : 0 00:27:11.342 20:44:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:11.342 20:44:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:11.342 20:44:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:11.342 20:44:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:11.342 20:44:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:11.342 20:44:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:11.342 20:44:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:11.342 20:44:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:11.342 20:44:29 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:11.342 20:44:29 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:11.342 20:44:29 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:27:11.342 20:44:29 -- target/multiconnection.sh@16 -- # nvmftestinit 00:27:11.342 20:44:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:11.342 20:44:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:11.342 20:44:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:11.342 20:44:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:11.342 20:44:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:11.342 20:44:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.342 20:44:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:11.342 20:44:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.342 20:44:29 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:27:11.342 20:44:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:11.342 20:44:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:11.342 20:44:29 --
common/autotest_common.sh@10 -- # set +x 00:27:16.625 20:44:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:16.626 20:44:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:16.626 20:44:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:16.626 20:44:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:16.626 20:44:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:16.626 20:44:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:16.626 20:44:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:16.626 20:44:34 -- nvmf/common.sh@294 -- # net_devs=() 00:27:16.626 20:44:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:16.626 20:44:34 -- nvmf/common.sh@295 -- # e810=() 00:27:16.626 20:44:34 -- nvmf/common.sh@295 -- # local -ga e810 00:27:16.626 20:44:34 -- nvmf/common.sh@296 -- # x722=() 00:27:16.626 20:44:34 -- nvmf/common.sh@296 -- # local -ga x722 00:27:16.626 20:44:34 -- nvmf/common.sh@297 -- # mlx=() 00:27:16.626 20:44:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:16.626 20:44:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:16.626 20:44:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:16.626 20:44:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:16.626 20:44:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:16.626 20:44:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:16.626 20:44:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:16.626 20:44:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:16.626 20:44:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:16.626 20:44:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:16.626 20:44:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:16.626 20:44:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:16.626 20:44:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:16.626 20:44:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:16.626 20:44:34 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:27:16.626 20:44:34 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:27:16.626 20:44:34 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:27:16.626 20:44:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:16.626 20:44:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:16.626 20:44:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:27:16.626 Found 0000:27:00.0 (0x8086 - 0x159b) 00:27:16.626 20:44:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:16.626 20:44:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:16.626 20:44:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.626 20:44:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.626 20:44:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:16.626 20:44:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:16.626 20:44:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:27:16.626 Found 0000:27:00.1 (0x8086 - 0x159b) 00:27:16.626 20:44:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:16.626 20:44:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:16.626 20:44:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.626 20:44:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.626 
20:44:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:16.626 20:44:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:16.626 20:44:34 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:27:16.626 20:44:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:16.626 20:44:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.626 20:44:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:16.626 20:44:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.626 20:44:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:27:16.626 Found net devices under 0000:27:00.0: cvl_0_0 00:27:16.626 20:44:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.626 20:44:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:16.626 20:44:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.626 20:44:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:16.626 20:44:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.626 20:44:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:27:16.626 Found net devices under 0000:27:00.1: cvl_0_1 00:27:16.626 20:44:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.626 20:44:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:16.626 20:44:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:16.626 20:44:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:16.626 20:44:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:16.626 20:44:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:16.626 20:44:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:16.626 20:44:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:16.626 20:44:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:16.626 20:44:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:16.626 20:44:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:16.626 20:44:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:16.626 20:44:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:16.626 20:44:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:16.626 20:44:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:16.626 20:44:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:16.626 20:44:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:16.626 20:44:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:16.626 20:44:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:16.626 20:44:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:16.626 20:44:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:16.626 20:44:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:16.626 20:44:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:16.626 20:44:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:16.626 20:44:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:16.626 20:44:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:16.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:16.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:27:16.626 00:27:16.626 --- 10.0.0.2 ping statistics --- 00:27:16.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.626 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:27:16.626 20:44:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:16.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:16.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:27:16.626 00:27:16.626 --- 10.0.0.1 ping statistics --- 00:27:16.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.626 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:27:16.626 20:44:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:16.626 20:44:34 -- nvmf/common.sh@410 -- # return 0 00:27:16.626 20:44:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:16.626 20:44:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:16.626 20:44:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:16.626 20:44:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:16.626 20:44:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:16.626 20:44:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:16.626 20:44:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:16.626 20:44:34 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:27:16.626 20:44:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:16.626 20:44:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:16.626 20:44:34 -- common/autotest_common.sh@10 -- # set +x 00:27:16.626 20:44:34 -- nvmf/common.sh@469 -- # nvmfpid=3659222 00:27:16.626 20:44:34 -- nvmf/common.sh@470 -- # waitforlisten 3659222 00:27:16.626 20:44:34 -- common/autotest_common.sh@819 -- # '[' -z 3659222 ']' 00:27:16.626 20:44:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.626 20:44:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:16.626 20:44:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:16.627 20:44:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:16.627 20:44:34 -- common/autotest_common.sh@10 -- # set +x 00:27:16.627 20:44:34 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:16.627 [2024-04-26 20:44:34.920656] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:16.627 [2024-04-26 20:44:34.920760] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:16.885 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.885 [2024-04-26 20:44:35.039624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:16.885 [2024-04-26 20:44:35.140200] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:16.885 [2024-04-26 20:44:35.140371] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:16.885 [2024-04-26 20:44:35.140387] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
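At this point multiconnection.sh has repeated the namespace setup from the fuzz test and launched a four-core target inside it (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF, hence the four reactor threads reported next). What the trace unrolls below, eleven subsystems over, is one provisioning loop followed by one connect loop; condensed, again with scripts/rpc.py standing in for the harness's rpc_cmd wrapper:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

  # target side: NVMF_SUBSYS=11 malloc-backed subsystems, all on the
  # same 10.0.0.2:4420 portal, serial numbers SPDK1..SPDK11
  for i in $(seq 1 11); do
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done

  # host side: one fabrics connection per subsystem; after each connect
  # the harness (waitforserial) polls lsblk until a block device with
  # serial SPDK$i shows up before moving on
  for i in $(seq 1 11); do
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
        -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420
  done

Once all eleven namespaces are visible, the fio wrapper drives a 10-second read job (bs=262144, iodepth=64, libaio) against every device at once, which produces the [job0]..[job10] output further down.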
00:27:16.885 [2024-04-26 20:44:35.140397] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:16.885 [2024-04-26 20:44:35.140544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.885 [2024-04-26 20:44:35.140650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:16.885 [2024-04-26 20:44:35.140750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.885 [2024-04-26 20:44:35.140759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:17.454 20:44:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:17.454 20:44:35 -- common/autotest_common.sh@852 -- # return 0 00:27:17.454 20:44:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:17.454 20:44:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:17.454 20:44:35 -- common/autotest_common.sh@10 -- # set +x 00:27:17.454 20:44:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:17.454 20:44:35 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:17.454 20:44:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.454 20:44:35 -- common/autotest_common.sh@10 -- # set +x 00:27:17.454 [2024-04-26 20:44:35.659399] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:17.454 20:44:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.454 20:44:35 -- target/multiconnection.sh@21 -- # seq 1 11 00:27:17.454 20:44:35 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:17.454 20:44:35 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:17.454 20:44:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.454 20:44:35 -- common/autotest_common.sh@10 -- # set +x 00:27:17.454 Malloc1 00:27:17.454 20:44:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.454 20:44:35 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:27:17.454 20:44:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.454 20:44:35 -- common/autotest_common.sh@10 -- # set +x 00:27:17.454 20:44:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.454 20:44:35 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:17.454 20:44:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.454 20:44:35 -- common/autotest_common.sh@10 -- # set +x 00:27:17.454 20:44:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.454 20:44:35 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:17.454 20:44:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.454 20:44:35 -- common/autotest_common.sh@10 -- # set +x 00:27:17.454 [2024-04-26 20:44:35.731760] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:17.454 20:44:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.455 20:44:35 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:17.455 20:44:35 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:27:17.455 20:44:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.455 20:44:35 -- common/autotest_common.sh@10 -- # set +x 00:27:17.455 Malloc2 00:27:17.455 20:44:35 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.455 20:44:35 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:27:17.455 20:44:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.455 20:44:35 -- common/autotest_common.sh@10 -- # set +x 00:27:17.455 20:44:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.455 20:44:35 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:27:17.455 20:44:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.455 20:44:35 -- common/autotest_common.sh@10 -- # set +x 00:27:17.455 20:44:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.455 20:44:35 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:17.455 20:44:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.455 20:44:35 -- common/autotest_common.sh@10 -- # set +x 00:27:17.455 20:44:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.455 20:44:35 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:17.455 20:44:35 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:27:17.716 20:44:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.716 20:44:35 -- common/autotest_common.sh@10 -- # set +x 00:27:17.716 Malloc3 00:27:17.716 20:44:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.716 20:44:35 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:27:17.716 20:44:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.716 20:44:35 -- common/autotest_common.sh@10 -- # set +x 00:27:17.716 20:44:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.716 20:44:35 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:27:17.716 20:44:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.716 20:44:35 -- common/autotest_common.sh@10 -- # set +x 00:27:17.716 20:44:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.716 20:44:35 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:27:17.716 20:44:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.716 20:44:35 -- common/autotest_common.sh@10 -- # set +x 00:27:17.716 20:44:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.716 20:44:35 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:17.716 20:44:35 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:27:17.716 20:44:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.716 20:44:35 -- common/autotest_common.sh@10 -- # set +x 00:27:17.716 Malloc4 00:27:17.716 20:44:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.716 20:44:35 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:27:17.716 20:44:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.716 20:44:35 -- common/autotest_common.sh@10 -- # set +x 00:27:17.716 20:44:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.716 20:44:35 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:27:17.716 20:44:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.716 20:44:35 
-- common/autotest_common.sh@10 -- # set +x 00:27:17.716 20:44:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.716 20:44:35 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:27:17.716 20:44:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.716 20:44:35 -- common/autotest_common.sh@10 -- # set +x 00:27:17.716 20:44:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.716 20:44:35 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:17.716 20:44:35 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:27:17.716 20:44:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.716 20:44:35 -- common/autotest_common.sh@10 -- # set +x 00:27:17.716 Malloc5 00:27:17.716 20:44:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.716 20:44:35 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:27:17.716 20:44:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.716 20:44:35 -- common/autotest_common.sh@10 -- # set +x 00:27:17.716 20:44:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.716 20:44:35 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:27:17.716 20:44:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.716 20:44:35 -- common/autotest_common.sh@10 -- # set +x 00:27:17.716 20:44:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.716 20:44:35 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:27:17.716 20:44:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.716 20:44:35 -- common/autotest_common.sh@10 -- # set +x 00:27:17.716 20:44:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.716 20:44:35 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:17.716 20:44:35 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:27:17.716 20:44:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.716 20:44:35 -- common/autotest_common.sh@10 -- # set +x 00:27:17.716 Malloc6 00:27:17.716 20:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.716 20:44:36 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:27:17.716 20:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.716 20:44:36 -- common/autotest_common.sh@10 -- # set +x 00:27:17.716 20:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.716 20:44:36 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:27:17.716 20:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.716 20:44:36 -- common/autotest_common.sh@10 -- # set +x 00:27:17.716 20:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.716 20:44:36 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:27:17.716 20:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.717 20:44:36 -- common/autotest_common.sh@10 -- # set +x 00:27:17.717 20:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.717 20:44:36 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:17.717 20:44:36 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:27:17.717 20:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.717 20:44:36 -- common/autotest_common.sh@10 -- # set +x 00:27:17.977 Malloc7 00:27:17.977 20:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.977 20:44:36 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:27:17.977 20:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.977 20:44:36 -- common/autotest_common.sh@10 -- # set +x 00:27:17.977 20:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.977 20:44:36 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:27:17.977 20:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.977 20:44:36 -- common/autotest_common.sh@10 -- # set +x 00:27:17.977 20:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.977 20:44:36 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:27:17.977 20:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.977 20:44:36 -- common/autotest_common.sh@10 -- # set +x 00:27:17.977 20:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.977 20:44:36 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:17.977 20:44:36 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:27:17.977 20:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.977 20:44:36 -- common/autotest_common.sh@10 -- # set +x 00:27:17.977 Malloc8 00:27:17.977 20:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.977 20:44:36 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:27:17.977 20:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.977 20:44:36 -- common/autotest_common.sh@10 -- # set +x 00:27:17.977 20:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.977 20:44:36 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:27:17.977 20:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.977 20:44:36 -- common/autotest_common.sh@10 -- # set +x 00:27:17.977 20:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.977 20:44:36 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:27:17.977 20:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.977 20:44:36 -- common/autotest_common.sh@10 -- # set +x 00:27:17.977 20:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.977 20:44:36 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:17.977 20:44:36 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:27:17.977 20:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.977 20:44:36 -- common/autotest_common.sh@10 -- # set +x 00:27:17.977 Malloc9 00:27:17.977 20:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.978 20:44:36 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:27:17.978 20:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.978 20:44:36 -- common/autotest_common.sh@10 -- # set +x 00:27:17.978 20:44:36 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.978 20:44:36 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:27:17.978 20:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.978 20:44:36 -- common/autotest_common.sh@10 -- # set +x 00:27:17.978 20:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.978 20:44:36 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:27:17.978 20:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.978 20:44:36 -- common/autotest_common.sh@10 -- # set +x 00:27:17.978 20:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.978 20:44:36 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:17.978 20:44:36 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:27:17.978 20:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.978 20:44:36 -- common/autotest_common.sh@10 -- # set +x 00:27:17.978 Malloc10 00:27:17.978 20:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.978 20:44:36 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:27:17.978 20:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.978 20:44:36 -- common/autotest_common.sh@10 -- # set +x 00:27:17.978 20:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.978 20:44:36 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:27:17.978 20:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.978 20:44:36 -- common/autotest_common.sh@10 -- # set +x 00:27:17.978 20:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.978 20:44:36 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:27:17.978 20:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.978 20:44:36 -- common/autotest_common.sh@10 -- # set +x 00:27:17.978 20:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.978 20:44:36 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:17.978 20:44:36 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:27:17.978 20:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.978 20:44:36 -- common/autotest_common.sh@10 -- # set +x 00:27:18.237 Malloc11 00:27:18.237 20:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:18.237 20:44:36 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:27:18.237 20:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:18.237 20:44:36 -- common/autotest_common.sh@10 -- # set +x 00:27:18.237 20:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:18.237 20:44:36 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:27:18.237 20:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:18.237 20:44:36 -- common/autotest_common.sh@10 -- # set +x 00:27:18.237 20:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:18.237 20:44:36 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:27:18.237 20:44:36 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:27:18.237 20:44:36 -- common/autotest_common.sh@10 -- # set +x 00:27:18.237 20:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:18.237 20:44:36 -- target/multiconnection.sh@28 -- # seq 1 11 00:27:18.237 20:44:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:18.237 20:44:36 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:19.613 20:44:37 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:27:19.613 20:44:37 -- common/autotest_common.sh@1177 -- # local i=0 00:27:19.613 20:44:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:19.613 20:44:37 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:19.613 20:44:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:21.522 20:44:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:21.522 20:44:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:21.522 20:44:39 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:27:21.522 20:44:39 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:21.522 20:44:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:21.522 20:44:39 -- common/autotest_common.sh@1187 -- # return 0 00:27:21.522 20:44:39 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:21.522 20:44:39 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:27:22.901 20:44:41 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:27:22.901 20:44:41 -- common/autotest_common.sh@1177 -- # local i=0 00:27:22.901 20:44:41 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:22.901 20:44:41 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:22.901 20:44:41 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:25.438 20:44:43 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:25.438 20:44:43 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:25.438 20:44:43 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:27:25.438 20:44:43 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:25.438 20:44:43 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:25.438 20:44:43 -- common/autotest_common.sh@1187 -- # return 0 00:27:25.438 20:44:43 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:25.438 20:44:43 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:27:26.820 20:44:44 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:27:26.820 20:44:44 -- common/autotest_common.sh@1177 -- # local i=0 00:27:26.820 20:44:44 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:26.820 20:44:44 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:26.820 20:44:44 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:28.727 20:44:46 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:28.727 20:44:46 -- common/autotest_common.sh@1186 -- # 
lsblk -l -o NAME,SERIAL 00:27:28.727 20:44:46 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:27:28.727 20:44:46 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:28.727 20:44:46 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:28.727 20:44:46 -- common/autotest_common.sh@1187 -- # return 0 00:27:28.727 20:44:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:28.727 20:44:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:27:30.103 20:44:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:27:30.103 20:44:48 -- common/autotest_common.sh@1177 -- # local i=0 00:27:30.103 20:44:48 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:30.103 20:44:48 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:30.103 20:44:48 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:32.638 20:44:50 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:32.638 20:44:50 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:32.638 20:44:50 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:27:32.638 20:44:50 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:32.638 20:44:50 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:32.638 20:44:50 -- common/autotest_common.sh@1187 -- # return 0 00:27:32.638 20:44:50 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:32.638 20:44:50 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:27:34.014 20:44:52 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:27:34.014 20:44:52 -- common/autotest_common.sh@1177 -- # local i=0 00:27:34.014 20:44:52 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:34.014 20:44:52 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:34.014 20:44:52 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:35.918 20:44:54 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:35.918 20:44:54 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:35.918 20:44:54 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:27:35.918 20:44:54 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:35.918 20:44:54 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:35.918 20:44:54 -- common/autotest_common.sh@1187 -- # return 0 00:27:35.918 20:44:54 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:35.918 20:44:54 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:27:37.824 20:44:55 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:27:37.824 20:44:55 -- common/autotest_common.sh@1177 -- # local i=0 00:27:37.824 20:44:55 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:37.824 20:44:55 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:37.824 20:44:55 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:39.858 
20:44:57 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:39.858 20:44:57 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:39.858 20:44:57 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:27:39.858 20:44:57 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:39.858 20:44:57 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:39.858 20:44:57 -- common/autotest_common.sh@1187 -- # return 0 00:27:39.858 20:44:57 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:39.858 20:44:57 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:27:41.766 20:44:59 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:27:41.766 20:44:59 -- common/autotest_common.sh@1177 -- # local i=0 00:27:41.766 20:44:59 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:41.766 20:44:59 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:41.766 20:44:59 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:43.668 20:45:01 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:43.668 20:45:01 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:43.668 20:45:01 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:27:43.668 20:45:01 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:43.668 20:45:01 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:43.668 20:45:01 -- common/autotest_common.sh@1187 -- # return 0 00:27:43.668 20:45:01 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:43.668 20:45:01 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:27:45.050 20:45:03 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:27:45.050 20:45:03 -- common/autotest_common.sh@1177 -- # local i=0 00:27:45.050 20:45:03 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:45.050 20:45:03 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:45.050 20:45:03 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:47.581 20:45:05 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:47.581 20:45:05 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:47.581 20:45:05 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:27:47.581 20:45:05 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:47.581 20:45:05 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:47.581 20:45:05 -- common/autotest_common.sh@1187 -- # return 0 00:27:47.581 20:45:05 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:47.581 20:45:05 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:27:48.963 20:45:07 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:27:48.963 20:45:07 -- common/autotest_common.sh@1177 -- # local i=0 00:27:48.963 20:45:07 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:48.963 20:45:07 -- 
common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:48.963 20:45:07 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:50.866 20:45:09 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:50.866 20:45:09 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:50.866 20:45:09 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:27:50.866 20:45:09 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:50.866 20:45:09 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:50.866 20:45:09 -- common/autotest_common.sh@1187 -- # return 0 00:27:50.866 20:45:09 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:50.866 20:45:09 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:27:52.771 20:45:10 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:27:52.771 20:45:10 -- common/autotest_common.sh@1177 -- # local i=0 00:27:52.771 20:45:10 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:52.771 20:45:10 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:52.771 20:45:10 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:54.682 20:45:12 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:54.682 20:45:12 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:54.682 20:45:12 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:27:54.682 20:45:12 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:54.682 20:45:12 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:54.682 20:45:12 -- common/autotest_common.sh@1187 -- # return 0 00:27:54.682 20:45:12 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:54.682 20:45:12 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:27:56.587 20:45:14 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:27:56.587 20:45:14 -- common/autotest_common.sh@1177 -- # local i=0 00:27:56.587 20:45:14 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:56.587 20:45:14 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:56.587 20:45:14 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:59.131 20:45:16 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:59.131 20:45:16 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:59.131 20:45:16 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:27:59.131 20:45:16 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:59.131 20:45:16 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:59.131 20:45:16 -- common/autotest_common.sh@1187 -- # return 0 00:27:59.131 20:45:16 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:27:59.131 [global] 00:27:59.131 thread=1 00:27:59.131 invalidate=1 00:27:59.131 rw=read 00:27:59.131 time_based=1 00:27:59.131 runtime=10 00:27:59.131 ioengine=libaio 00:27:59.131 direct=1 00:27:59.131 bs=262144 00:27:59.131 iodepth=64 00:27:59.131 norandommap=1 00:27:59.131 numjobs=1 00:27:59.131 00:27:59.131 [job0] 
00:27:59.131 filename=/dev/nvme0n1 00:27:59.131 [job1] 00:27:59.131 filename=/dev/nvme10n1 00:27:59.131 [job2] 00:27:59.131 filename=/dev/nvme1n1 00:27:59.131 [job3] 00:27:59.131 filename=/dev/nvme2n1 00:27:59.131 [job4] 00:27:59.131 filename=/dev/nvme3n1 00:27:59.131 [job5] 00:27:59.131 filename=/dev/nvme4n1 00:27:59.131 [job6] 00:27:59.131 filename=/dev/nvme5n1 00:27:59.131 [job7] 00:27:59.131 filename=/dev/nvme6n1 00:27:59.131 [job8] 00:27:59.131 filename=/dev/nvme7n1 00:27:59.131 [job9] 00:27:59.131 filename=/dev/nvme8n1 00:27:59.131 [job10] 00:27:59.131 filename=/dev/nvme9n1 00:27:59.131 Could not set queue depth (nvme0n1) 00:27:59.131 Could not set queue depth (nvme10n1) 00:27:59.131 Could not set queue depth (nvme1n1) 00:27:59.131 Could not set queue depth (nvme2n1) 00:27:59.131 Could not set queue depth (nvme3n1) 00:27:59.131 Could not set queue depth (nvme4n1) 00:27:59.131 Could not set queue depth (nvme5n1) 00:27:59.131 Could not set queue depth (nvme6n1) 00:27:59.131 Could not set queue depth (nvme7n1) 00:27:59.131 Could not set queue depth (nvme8n1) 00:27:59.131 Could not set queue depth (nvme9n1) 00:27:59.394 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:59.394 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:59.394 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:59.394 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:59.394 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:59.394 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:59.394 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:59.394 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:59.394 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:59.394 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:59.394 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:59.394 fio-3.35 00:27:59.394 Starting 11 threads 00:28:11.599 00:28:11.599 job0: (groupid=0, jobs=1): err= 0: pid=3668334: Fri Apr 26 20:45:28 2024 00:28:11.599 read: IOPS=779, BW=195MiB/s (204MB/s)(1976MiB/10140msec) 00:28:11.599 slat (usec): min=5, max=106562, avg=719.28, stdev=4110.30 00:28:11.599 clat (usec): min=1029, max=288774, avg=81337.77, stdev=60738.30 00:28:11.599 lat (usec): min=1045, max=333511, avg=82057.05, stdev=61399.25 00:28:11.599 clat percentiles (msec): 00:28:11.599 | 1.00th=[ 3], 5.00th=[ 9], 10.00th=[ 17], 20.00th=[ 28], 00:28:11.599 | 30.00th=[ 37], 40.00th=[ 51], 50.00th=[ 62], 60.00th=[ 84], 00:28:11.599 | 70.00th=[ 107], 80.00th=[ 138], 90.00th=[ 178], 95.00th=[ 203], 00:28:11.599 | 99.00th=[ 230], 99.50th=[ 234], 99.90th=[ 243], 99.95th=[ 266], 00:28:11.599 | 99.99th=[ 288] 00:28:11.599 bw ( KiB/s): min=86016, max=373760, per=9.99%, avg=200624.55, stdev=91133.42, samples=20 00:28:11.599 iops : min= 336, max= 1460, avg=783.60, stdev=356.07, samples=20 00:28:11.599 lat (msec) : 2=0.33%, 4=2.09%, 10=3.32%, 20=7.80%, 50=25.79% 
00:28:11.599 lat (msec) : 100=28.46%, 250=32.16%, 500=0.06% 00:28:11.599 cpu : usr=0.15%, sys=1.80%, ctx=1754, majf=0, minf=4097 00:28:11.599 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:28:11.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:11.599 issued rwts: total=7902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.599 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:11.599 job1: (groupid=0, jobs=1): err= 0: pid=3668360: Fri Apr 26 20:45:28 2024 00:28:11.599 read: IOPS=663, BW=166MiB/s (174MB/s)(1673MiB/10094msec) 00:28:11.600 slat (usec): min=6, max=145980, avg=948.04, stdev=6509.79 00:28:11.600 clat (msec): min=3, max=313, avg=95.50, stdev=69.01 00:28:11.600 lat (msec): min=3, max=323, avg=96.44, stdev=70.03 00:28:11.600 clat percentiles (msec): 00:28:11.600 | 1.00th=[ 7], 5.00th=[ 10], 10.00th=[ 17], 20.00th=[ 28], 00:28:11.600 | 30.00th=[ 40], 40.00th=[ 56], 50.00th=[ 77], 60.00th=[ 112], 00:28:11.600 | 70.00th=[ 142], 80.00th=[ 171], 90.00th=[ 199], 95.00th=[ 211], 00:28:11.600 | 99.00th=[ 232], 99.50th=[ 241], 99.90th=[ 288], 99.95th=[ 300], 00:28:11.600 | 99.99th=[ 313] 00:28:11.600 bw ( KiB/s): min=80896, max=298496, per=8.45%, avg=169757.65, stdev=62489.63, samples=20 00:28:11.600 iops : min= 316, max= 1166, avg=663.05, stdev=244.14, samples=20 00:28:11.600 lat (msec) : 4=0.04%, 10=5.66%, 20=7.95%, 50=23.34%, 100=21.01% 00:28:11.600 lat (msec) : 250=41.72%, 500=0.28% 00:28:11.600 cpu : usr=0.11%, sys=1.64%, ctx=1555, majf=0, minf=4097 00:28:11.600 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:28:11.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:11.600 issued rwts: total=6693,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.600 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:11.600 job2: (groupid=0, jobs=1): err= 0: pid=3668394: Fri Apr 26 20:45:28 2024 00:28:11.600 read: IOPS=451, BW=113MiB/s (118MB/s)(1141MiB/10114msec) 00:28:11.600 slat (usec): min=7, max=154603, avg=2135.45, stdev=7467.38 00:28:11.600 clat (msec): min=51, max=325, avg=139.60, stdev=45.46 00:28:11.600 lat (msec): min=51, max=327, avg=141.74, stdev=46.52 00:28:11.600 clat percentiles (msec): 00:28:11.600 | 1.00th=[ 59], 5.00th=[ 71], 10.00th=[ 77], 20.00th=[ 91], 00:28:11.600 | 30.00th=[ 109], 40.00th=[ 126], 50.00th=[ 140], 60.00th=[ 159], 00:28:11.600 | 70.00th=[ 174], 80.00th=[ 184], 90.00th=[ 199], 95.00th=[ 207], 00:28:11.600 | 99.00th=[ 222], 99.50th=[ 228], 99.90th=[ 262], 99.95th=[ 300], 00:28:11.600 | 99.99th=[ 326] 00:28:11.600 bw ( KiB/s): min=76800, max=200192, per=5.74%, avg=115212.20, stdev=37783.63, samples=20 00:28:11.600 iops : min= 300, max= 782, avg=449.95, stdev=147.62, samples=20 00:28:11.600 lat (msec) : 100=25.20%, 250=74.60%, 500=0.20% 00:28:11.600 cpu : usr=0.08%, sys=1.45%, ctx=935, majf=0, minf=4097 00:28:11.600 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:28:11.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:11.600 issued rwts: total=4563,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.600 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:11.600 job3: (groupid=0, jobs=1): err= 0: pid=3668409: Fri 
Apr 26 20:45:28 2024 00:28:11.600 read: IOPS=660, BW=165MiB/s (173MB/s)(1666MiB/10096msec) 00:28:11.600 slat (usec): min=7, max=149012, avg=1129.70, stdev=5033.46 00:28:11.600 clat (msec): min=4, max=340, avg=95.75, stdev=55.79 00:28:11.600 lat (msec): min=4, max=351, avg=96.88, stdev=56.44 00:28:11.600 clat percentiles (msec): 00:28:11.600 | 1.00th=[ 8], 5.00th=[ 22], 10.00th=[ 35], 20.00th=[ 53], 00:28:11.600 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 81], 60.00th=[ 93], 00:28:11.600 | 70.00th=[ 108], 80.00th=[ 153], 90.00th=[ 190], 95.00th=[ 209], 00:28:11.600 | 99.00th=[ 226], 99.50th=[ 232], 99.90th=[ 255], 99.95th=[ 279], 00:28:11.600 | 99.99th=[ 342] 00:28:11.600 bw ( KiB/s): min=78336, max=335872, per=8.41%, avg=168951.50, stdev=64410.14, samples=20 00:28:11.600 iops : min= 306, max= 1312, avg=659.90, stdev=251.60, samples=20 00:28:11.600 lat (msec) : 10=1.89%, 20=2.75%, 50=13.46%, 100=47.69%, 250=34.09% 00:28:11.600 lat (msec) : 500=0.12% 00:28:11.600 cpu : usr=0.14%, sys=2.03%, ctx=1361, majf=0, minf=3597 00:28:11.600 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:28:11.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:11.600 issued rwts: total=6664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.600 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:11.600 job4: (groupid=0, jobs=1): err= 0: pid=3668420: Fri Apr 26 20:45:28 2024 00:28:11.600 read: IOPS=619, BW=155MiB/s (162MB/s)(1564MiB/10096msec) 00:28:11.600 slat (usec): min=5, max=234394, avg=1047.70, stdev=6113.57 00:28:11.600 clat (usec): min=970, max=300979, avg=102199.90, stdev=67817.24 00:28:11.600 lat (usec): min=998, max=436035, avg=103247.61, stdev=68710.88 00:28:11.600 clat percentiles (msec): 00:28:11.600 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 16], 20.00th=[ 35], 00:28:11.600 | 30.00th=[ 53], 40.00th=[ 77], 50.00th=[ 92], 60.00th=[ 111], 00:28:11.600 | 70.00th=[ 150], 80.00th=[ 178], 90.00th=[ 199], 95.00th=[ 209], 00:28:11.600 | 99.00th=[ 247], 99.50th=[ 284], 99.90th=[ 284], 99.95th=[ 292], 00:28:11.600 | 99.99th=[ 300] 00:28:11.600 bw ( KiB/s): min=81408, max=381440, per=7.90%, avg=158518.05, stdev=72788.90, samples=20 00:28:11.600 iops : min= 318, max= 1490, avg=619.15, stdev=284.32, samples=20 00:28:11.600 lat (usec) : 1000=0.02% 00:28:11.600 lat (msec) : 2=0.35%, 4=1.06%, 10=3.85%, 20=7.47%, 50=16.05% 00:28:11.600 lat (msec) : 100=26.56%, 250=43.68%, 500=0.96% 00:28:11.600 cpu : usr=0.17%, sys=1.56%, ctx=1413, majf=0, minf=4097 00:28:11.600 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:28:11.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:11.600 issued rwts: total=6254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.600 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:11.600 job5: (groupid=0, jobs=1): err= 0: pid=3668461: Fri Apr 26 20:45:28 2024 00:28:11.600 read: IOPS=716, BW=179MiB/s (188MB/s)(1809MiB/10098msec) 00:28:11.600 slat (usec): min=5, max=173703, avg=847.94, stdev=5033.76 00:28:11.600 clat (usec): min=1044, max=332143, avg=88417.92, stdev=60555.46 00:28:11.600 lat (usec): min=1072, max=332167, avg=89265.87, stdev=61262.40 00:28:11.600 clat percentiles (msec): 00:28:11.600 | 1.00th=[ 7], 5.00th=[ 13], 10.00th=[ 17], 20.00th=[ 31], 00:28:11.600 | 30.00th=[ 50], 40.00th=[ 62], 
50.00th=[ 75], 60.00th=[ 94], 00:28:11.600 | 70.00th=[ 115], 80.00th=[ 144], 90.00th=[ 186], 95.00th=[ 203], 00:28:11.600 | 99.00th=[ 232], 99.50th=[ 268], 99.90th=[ 279], 99.95th=[ 288], 00:28:11.600 | 99.99th=[ 334] 00:28:11.600 bw ( KiB/s): min=74752, max=380928, per=9.15%, avg=183670.15, stdev=84105.66, samples=20 00:28:11.600 iops : min= 292, max= 1488, avg=717.40, stdev=328.52, samples=20 00:28:11.600 lat (msec) : 2=0.12%, 4=0.26%, 10=2.68%, 20=9.90%, 50=17.98% 00:28:11.600 lat (msec) : 100=32.56%, 250=35.94%, 500=0.55% 00:28:11.600 cpu : usr=0.09%, sys=1.94%, ctx=1585, majf=0, minf=4097 00:28:11.600 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:28:11.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:11.600 issued rwts: total=7235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.600 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:11.600 job6: (groupid=0, jobs=1): err= 0: pid=3668483: Fri Apr 26 20:45:28 2024 00:28:11.600 read: IOPS=499, BW=125MiB/s (131MB/s)(1251MiB/10017msec) 00:28:11.600 slat (usec): min=8, max=119419, avg=1581.21, stdev=6723.23 00:28:11.600 clat (msec): min=3, max=301, avg=126.45, stdev=66.28 00:28:11.600 lat (msec): min=3, max=303, avg=128.03, stdev=67.47 00:28:11.600 clat percentiles (msec): 00:28:11.600 | 1.00th=[ 8], 5.00th=[ 17], 10.00th=[ 32], 20.00th=[ 59], 00:28:11.600 | 30.00th=[ 73], 40.00th=[ 105], 50.00th=[ 140], 60.00th=[ 165], 00:28:11.600 | 70.00th=[ 178], 80.00th=[ 192], 90.00th=[ 205], 95.00th=[ 215], 00:28:11.600 | 99.00th=[ 232], 99.50th=[ 243], 99.90th=[ 279], 99.95th=[ 292], 00:28:11.600 | 99.99th=[ 300] 00:28:11.600 bw ( KiB/s): min=81920, max=287232, per=6.30%, avg=126505.85, stdev=58556.80, samples=20 00:28:11.600 iops : min= 320, max= 1122, avg=494.15, stdev=228.74, samples=20 00:28:11.600 lat (msec) : 4=0.04%, 10=1.56%, 20=4.72%, 50=9.99%, 100=22.10% 00:28:11.600 lat (msec) : 250=61.27%, 500=0.32% 00:28:11.600 cpu : usr=0.09%, sys=1.44%, ctx=1152, majf=0, minf=4097 00:28:11.600 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:28:11.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:11.600 issued rwts: total=5004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.600 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:11.600 job7: (groupid=0, jobs=1): err= 0: pid=3668498: Fri Apr 26 20:45:28 2024 00:28:11.600 read: IOPS=441, BW=110MiB/s (116MB/s)(1113MiB/10077msec) 00:28:11.600 slat (usec): min=8, max=65980, avg=1657.36, stdev=5450.23 00:28:11.600 clat (msec): min=4, max=306, avg=143.17, stdev=52.06 00:28:11.600 lat (msec): min=4, max=306, avg=144.83, stdev=52.79 00:28:11.600 clat percentiles (msec): 00:28:11.600 | 1.00th=[ 19], 5.00th=[ 55], 10.00th=[ 71], 20.00th=[ 92], 00:28:11.600 | 30.00th=[ 115], 40.00th=[ 134], 50.00th=[ 155], 60.00th=[ 167], 00:28:11.600 | 70.00th=[ 178], 80.00th=[ 188], 90.00th=[ 203], 95.00th=[ 218], 00:28:11.600 | 99.00th=[ 245], 99.50th=[ 255], 99.90th=[ 275], 99.95th=[ 275], 00:28:11.600 | 99.99th=[ 309] 00:28:11.600 bw ( KiB/s): min=83968, max=214957, per=5.59%, avg=112279.35, stdev=34742.15, samples=20 00:28:11.600 iops : min= 328, max= 839, avg=438.50, stdev=135.58, samples=20 00:28:11.600 lat (msec) : 10=0.34%, 20=0.97%, 50=3.12%, 100=20.09%, 250=74.70% 00:28:11.600 lat (msec) : 500=0.79% 
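Each jobN block in this output follows the same fio layout: the read/write summary line, slat/clat/lat statistics, a clat percentile table, then bw and iops, cpu usage, and queue-depth accounting. When comparing jobs it can help to pull a single field out of a saved copy of the console text; a throwaway one-liner along these lines (illustrative only; fio.log is a hypothetical file holding this text, not something the test itself produces) prints each job's average bandwidth in KiB/s:

awk -F'avg=' '/bw \(/ { split($2, f, ","); print f[1] }' fio.log

Run against the blocks above it would print 200624.55 for job0, 169757.65 for job1, and so on in job order.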
00:28:11.600 cpu : usr=0.10%, sys=1.36%, ctx=1121, majf=0, minf=4097 00:28:11.600 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:28:11.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:11.600 issued rwts: total=4450,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.600 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:11.600 job8: (groupid=0, jobs=1): err= 0: pid=3668548: Fri Apr 26 20:45:28 2024 00:28:11.600 read: IOPS=1276, BW=319MiB/s (335MB/s)(3196MiB/10018msec) 00:28:11.600 slat (usec): min=7, max=114191, avg=595.30, stdev=2406.72 00:28:11.600 clat (msec): min=6, max=261, avg=49.52, stdev=38.61 00:28:11.600 lat (msec): min=6, max=306, avg=50.12, stdev=38.83 00:28:11.600 clat percentiles (msec): 00:28:11.601 | 1.00th=[ 16], 5.00th=[ 28], 10.00th=[ 29], 20.00th=[ 31], 00:28:11.601 | 30.00th=[ 32], 40.00th=[ 33], 50.00th=[ 35], 60.00th=[ 39], 00:28:11.601 | 70.00th=[ 46], 80.00th=[ 57], 90.00th=[ 82], 95.00th=[ 169], 00:28:11.601 | 99.00th=[ 203], 99.50th=[ 220], 99.90th=[ 236], 99.95th=[ 241], 00:28:11.601 | 99.99th=[ 262] 00:28:11.601 bw ( KiB/s): min=94720, max=519680, per=16.22%, avg=325588.95, stdev=143063.12, samples=20 00:28:11.601 iops : min= 370, max= 2030, avg=1271.80, stdev=558.85, samples=20 00:28:11.601 lat (msec) : 10=0.30%, 20=1.13%, 50=73.22%, 100=18.11%, 250=7.24% 00:28:11.601 lat (msec) : 500=0.02% 00:28:11.601 cpu : usr=0.11%, sys=2.61%, ctx=2543, majf=0, minf=4097 00:28:11.601 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:28:11.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:11.601 issued rwts: total=12784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.601 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:11.601 job9: (groupid=0, jobs=1): err= 0: pid=3668550: Fri Apr 26 20:45:28 2024 00:28:11.601 read: IOPS=519, BW=130MiB/s (136MB/s)(1308MiB/10076msec) 00:28:11.601 slat (usec): min=7, max=114555, avg=1897.30, stdev=5724.35 00:28:11.601 clat (msec): min=22, max=321, avg=121.22, stdev=57.44 00:28:11.601 lat (msec): min=23, max=321, avg=123.12, stdev=58.40 00:28:11.601 clat percentiles (msec): 00:28:11.601 | 1.00th=[ 26], 5.00th=[ 29], 10.00th=[ 34], 20.00th=[ 68], 00:28:11.601 | 30.00th=[ 86], 40.00th=[ 103], 50.00th=[ 127], 60.00th=[ 142], 00:28:11.601 | 70.00th=[ 163], 80.00th=[ 178], 90.00th=[ 194], 95.00th=[ 207], 00:28:11.601 | 99.00th=[ 228], 99.50th=[ 234], 99.90th=[ 245], 99.95th=[ 249], 00:28:11.601 | 99.99th=[ 321] 00:28:11.601 bw ( KiB/s): min=67584, max=395264, per=6.59%, avg=132356.05, stdev=73442.00, samples=20 00:28:11.601 iops : min= 264, max= 1544, avg=517.00, stdev=286.89, samples=20 00:28:11.601 lat (msec) : 50=17.33%, 100=22.32%, 250=60.31%, 500=0.04% 00:28:11.601 cpu : usr=0.06%, sys=1.43%, ctx=1069, majf=0, minf=4097 00:28:11.601 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:28:11.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:11.601 issued rwts: total=5233,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.601 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:11.601 job10: (groupid=0, jobs=1): err= 0: pid=3668551: Fri Apr 26 20:45:28 2024 00:28:11.601 read: IOPS=1271, 
BW=318MiB/s (333MB/s)(3187MiB/10027msec) 00:28:11.601 slat (usec): min=6, max=120873, avg=649.75, stdev=3287.32 00:28:11.601 clat (msec): min=2, max=313, avg=49.66, stdev=45.05 00:28:11.601 lat (msec): min=2, max=331, avg=50.31, stdev=45.54 00:28:11.601 clat percentiles (msec): 00:28:11.601 | 1.00th=[ 8], 5.00th=[ 23], 10.00th=[ 28], 20.00th=[ 29], 00:28:11.601 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 33], 60.00th=[ 35], 00:28:11.601 | 70.00th=[ 40], 80.00th=[ 58], 90.00th=[ 87], 95.00th=[ 182], 00:28:11.601 | 99.00th=[ 222], 99.50th=[ 232], 99.90th=[ 243], 99.95th=[ 257], 00:28:11.601 | 99.99th=[ 313] 00:28:11.601 bw ( KiB/s): min=83456, max=544768, per=16.17%, avg=324659.10, stdev=171498.54, samples=20 00:28:11.601 iops : min= 326, max= 2128, avg=1268.10, stdev=669.94, samples=20 00:28:11.601 lat (msec) : 4=0.07%, 10=1.37%, 20=2.97%, 50=72.25%, 100=14.88% 00:28:11.601 lat (msec) : 250=8.39%, 500=0.06% 00:28:11.601 cpu : usr=0.14%, sys=2.71%, ctx=2490, majf=0, minf=4097 00:28:11.601 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:28:11.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:11.601 issued rwts: total=12746,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.601 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:11.601 00:28:11.601 Run status group 0 (all jobs): 00:28:11.601 READ: bw=1961MiB/s (2056MB/s), 110MiB/s-319MiB/s (116MB/s-335MB/s), io=19.4GiB (20.8GB), run=10017-10140msec 00:28:11.601 00:28:11.601 Disk stats (read/write): 00:28:11.601 nvme0n1: ios=15676/0, merge=0/0, ticks=1249164/0, in_queue=1249164, util=95.35% 00:28:11.601 nvme10n1: ios=13244/0, merge=0/0, ticks=1245529/0, in_queue=1245529, util=95.68% 00:28:11.601 nvme1n1: ios=8992/0, merge=0/0, ticks=1238696/0, in_queue=1238696, util=96.23% 00:28:11.601 nvme2n1: ios=13192/0, merge=0/0, ticks=1244183/0, in_queue=1244183, util=96.56% 00:28:11.601 nvme3n1: ios=12378/0, merge=0/0, ticks=1245561/0, in_queue=1245561, util=96.68% 00:28:11.601 nvme4n1: ios=14309/0, merge=0/0, ticks=1246314/0, in_queue=1246314, util=97.29% 00:28:11.601 nvme5n1: ios=9825/0, merge=0/0, ticks=1240243/0, in_queue=1240243, util=97.61% 00:28:11.601 nvme6n1: ios=8755/0, merge=0/0, ticks=1239471/0, in_queue=1239471, util=97.86% 00:28:11.601 nvme7n1: ios=25416/0, merge=0/0, ticks=1252337/0, in_queue=1252337, util=98.63% 00:28:11.601 nvme8n1: ios=10298/0, merge=0/0, ticks=1231990/0, in_queue=1231990, util=98.95% 00:28:11.601 nvme9n1: ios=25344/0, merge=0/0, ticks=1247363/0, in_queue=1247363, util=99.21% 00:28:11.601 20:45:28 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:28:11.601 [global] 00:28:11.601 thread=1 00:28:11.601 invalidate=1 00:28:11.601 rw=randwrite 00:28:11.601 time_based=1 00:28:11.601 runtime=10 00:28:11.601 ioengine=libaio 00:28:11.601 direct=1 00:28:11.601 bs=262144 00:28:11.601 iodepth=64 00:28:11.601 norandommap=1 00:28:11.601 numjobs=1 00:28:11.601 00:28:11.601 [job0] 00:28:11.601 filename=/dev/nvme0n1 00:28:11.601 [job1] 00:28:11.601 filename=/dev/nvme10n1 00:28:11.601 [job2] 00:28:11.601 filename=/dev/nvme1n1 00:28:11.601 [job3] 00:28:11.601 filename=/dev/nvme2n1 00:28:11.601 [job4] 00:28:11.601 filename=/dev/nvme3n1 00:28:11.601 [job5] 00:28:11.601 filename=/dev/nvme4n1 00:28:11.601 [job6] 00:28:11.601 filename=/dev/nvme5n1 00:28:11.601 [job7] 00:28:11.601 
filename=/dev/nvme6n1 00:28:11.601 [job8] 00:28:11.601 filename=/dev/nvme7n1 00:28:11.601 [job9] 00:28:11.601 filename=/dev/nvme8n1 00:28:11.601 [job10] 00:28:11.601 filename=/dev/nvme9n1 00:28:11.601 Could not set queue depth (nvme0n1) 00:28:11.601 Could not set queue depth (nvme10n1) 00:28:11.601 Could not set queue depth (nvme1n1) 00:28:11.601 Could not set queue depth (nvme2n1) 00:28:11.601 Could not set queue depth (nvme3n1) 00:28:11.601 Could not set queue depth (nvme4n1) 00:28:11.601 Could not set queue depth (nvme5n1) 00:28:11.601 Could not set queue depth (nvme6n1) 00:28:11.601 Could not set queue depth (nvme7n1) 00:28:11.601 Could not set queue depth (nvme8n1) 00:28:11.601 Could not set queue depth (nvme9n1) 00:28:11.601 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:11.601 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:11.601 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:11.601 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:11.601 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:11.601 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:11.601 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:11.601 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:11.601 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:11.601 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:11.601 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:11.601 fio-3.35 00:28:11.601 Starting 11 threads 00:28:21.586 00:28:21.586 job0: (groupid=0, jobs=1): err= 0: pid=3670788: Fri Apr 26 20:45:39 2024 00:28:21.586 write: IOPS=407, BW=102MiB/s (107MB/s)(1032MiB/10140msec); 0 zone resets 00:28:21.586 slat (usec): min=17, max=82789, avg=2402.70, stdev=4687.24 00:28:21.586 clat (msec): min=25, max=281, avg=154.75, stdev=28.58 00:28:21.586 lat (msec): min=25, max=281, avg=157.15, stdev=28.68 00:28:21.586 clat percentiles (msec): 00:28:21.586 | 1.00th=[ 68], 5.00th=[ 131], 10.00th=[ 136], 20.00th=[ 138], 00:28:21.586 | 30.00th=[ 142], 40.00th=[ 144], 50.00th=[ 146], 60.00th=[ 148], 00:28:21.586 | 70.00th=[ 155], 80.00th=[ 176], 90.00th=[ 194], 95.00th=[ 215], 00:28:21.586 | 99.00th=[ 247], 99.50th=[ 253], 99.90th=[ 271], 99.95th=[ 271], 00:28:21.586 | 99.99th=[ 284] 00:28:21.586 bw ( KiB/s): min=69632, max=120320, per=7.11%, avg=104038.40, stdev=14771.70, samples=20 00:28:21.586 iops : min= 272, max= 470, avg=406.40, stdev=57.70, samples=20 00:28:21.586 lat (msec) : 50=0.46%, 100=0.85%, 250=98.06%, 500=0.63% 00:28:21.586 cpu : usr=1.29%, sys=1.16%, ctx=1115, majf=0, minf=1 00:28:21.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:28:21.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:21.586 issued 
rwts: total=0,4127,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.586 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:21.586 job1: (groupid=0, jobs=1): err= 0: pid=3670802: Fri Apr 26 20:45:39 2024 00:28:21.586 write: IOPS=411, BW=103MiB/s (108MB/s)(1043MiB/10142msec); 0 zone resets 00:28:21.586 slat (usec): min=20, max=113232, avg=2375.87, stdev=4662.30 00:28:21.586 clat (msec): min=3, max=280, avg=153.18, stdev=26.57 00:28:21.586 lat (msec): min=3, max=280, avg=155.55, stdev=26.54 00:28:21.586 clat percentiles (msec): 00:28:21.586 | 1.00th=[ 126], 5.00th=[ 132], 10.00th=[ 136], 20.00th=[ 140], 00:28:21.586 | 30.00th=[ 142], 40.00th=[ 144], 50.00th=[ 146], 60.00th=[ 148], 00:28:21.586 | 70.00th=[ 150], 80.00th=[ 165], 90.00th=[ 190], 95.00th=[ 213], 00:28:21.586 | 99.00th=[ 239], 99.50th=[ 249], 99.90th=[ 271], 99.95th=[ 271], 00:28:21.586 | 99.99th=[ 279] 00:28:21.586 bw ( KiB/s): min=65667, max=114688, per=7.19%, avg=105120.15, stdev=14117.17, samples=20 00:28:21.586 iops : min= 256, max= 448, avg=410.60, stdev=55.22, samples=20 00:28:21.586 lat (msec) : 4=0.02%, 10=0.19%, 20=0.22%, 50=0.02%, 100=0.14% 00:28:21.586 lat (msec) : 250=98.99%, 500=0.41% 00:28:21.586 cpu : usr=1.28%, sys=1.10%, ctx=1130, majf=0, minf=1 00:28:21.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:28:21.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:21.586 issued rwts: total=0,4170,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.586 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:21.586 job2: (groupid=0, jobs=1): err= 0: pid=3670803: Fri Apr 26 20:45:39 2024 00:28:21.586 write: IOPS=589, BW=147MiB/s (154MB/s)(1492MiB/10128msec); 0 zone resets 00:28:21.586 slat (usec): min=16, max=192237, avg=1674.26, stdev=4178.66 00:28:21.586 clat (msec): min=17, max=351, avg=106.90, stdev=26.61 00:28:21.586 lat (msec): min=17, max=351, avg=108.58, stdev=26.81 00:28:21.586 clat percentiles (msec): 00:28:21.586 | 1.00th=[ 72], 5.00th=[ 89], 10.00th=[ 91], 20.00th=[ 94], 00:28:21.586 | 30.00th=[ 95], 40.00th=[ 96], 50.00th=[ 97], 60.00th=[ 100], 00:28:21.586 | 70.00th=[ 106], 80.00th=[ 127], 90.00th=[ 136], 95.00th=[ 140], 00:28:21.586 | 99.00th=[ 241], 99.50th=[ 271], 99.90th=[ 347], 99.95th=[ 351], 00:28:21.586 | 99.99th=[ 351] 00:28:21.586 bw ( KiB/s): min=101888, max=172544, per=10.33%, avg=151131.50, stdev=24192.80, samples=20 00:28:21.586 iops : min= 398, max= 674, avg=590.35, stdev=94.50, samples=20 00:28:21.586 lat (msec) : 20=0.07%, 50=0.40%, 100=62.47%, 250=36.26%, 500=0.80% 00:28:21.586 cpu : usr=1.64%, sys=1.48%, ctx=1513, majf=0, minf=1 00:28:21.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:28:21.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:21.586 issued rwts: total=0,5966,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.586 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:21.586 job3: (groupid=0, jobs=1): err= 0: pid=3670806: Fri Apr 26 20:45:39 2024 00:28:21.586 write: IOPS=549, BW=137MiB/s (144MB/s)(1392MiB/10128msec); 0 zone resets 00:28:21.586 slat (usec): min=15, max=38048, avg=1742.65, stdev=3183.85 00:28:21.586 clat (msec): min=5, max=261, avg=114.62, stdev=22.29 00:28:21.586 lat (msec): min=5, max=261, avg=116.36, stdev=22.46 00:28:21.586 clat percentiles (msec): 
00:28:21.586 | 1.00th=[ 31], 5.00th=[ 79], 10.00th=[ 91], 20.00th=[ 99], 00:28:21.586 | 30.00th=[ 113], 40.00th=[ 118], 50.00th=[ 122], 60.00th=[ 123], 00:28:21.586 | 70.00th=[ 125], 80.00th=[ 128], 90.00th=[ 133], 95.00th=[ 136], 00:28:21.587 | 99.00th=[ 146], 99.50th=[ 201], 99.90th=[ 251], 99.95th=[ 253], 00:28:21.587 | 99.99th=[ 262] 00:28:21.587 bw ( KiB/s): min=119296, max=178176, per=9.63%, avg=140917.15, stdev=17962.01, samples=20 00:28:21.587 iops : min= 466, max= 696, avg=550.45, stdev=70.16, samples=20 00:28:21.587 lat (msec) : 10=0.11%, 20=0.40%, 50=1.62%, 100=18.48%, 250=79.29% 00:28:21.587 lat (msec) : 500=0.11% 00:28:21.587 cpu : usr=1.59%, sys=1.43%, ctx=1629, majf=0, minf=1 00:28:21.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:28:21.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:21.587 issued rwts: total=0,5567,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.587 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:21.587 job4: (groupid=0, jobs=1): err= 0: pid=3670807: Fri Apr 26 20:45:39 2024 00:28:21.587 write: IOPS=440, BW=110MiB/s (116MB/s)(1118MiB/10142msec); 0 zone resets 00:28:21.587 slat (usec): min=19, max=97394, avg=2120.24, stdev=4299.73 00:28:21.587 clat (msec): min=6, max=281, avg=143.00, stdev=34.05 00:28:21.587 lat (msec): min=6, max=281, avg=145.12, stdev=34.39 00:28:21.587 clat percentiles (msec): 00:28:21.587 | 1.00th=[ 38], 5.00th=[ 87], 10.00th=[ 94], 20.00th=[ 134], 00:28:21.587 | 30.00th=[ 140], 40.00th=[ 144], 50.00th=[ 146], 60.00th=[ 148], 00:28:21.587 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 171], 95.00th=[ 201], 00:28:21.587 | 99.00th=[ 245], 99.50th=[ 255], 99.90th=[ 271], 99.95th=[ 271], 00:28:21.587 | 99.99th=[ 284] 00:28:21.587 bw ( KiB/s): min=67584, max=160256, per=7.71%, avg=112844.80, stdev=18916.96, samples=20 00:28:21.587 iops : min= 264, max= 626, avg=440.80, stdev=73.89, samples=20 00:28:21.587 lat (msec) : 10=0.04%, 20=0.27%, 50=1.50%, 100=10.18%, 250=87.41% 00:28:21.587 lat (msec) : 500=0.60% 00:28:21.587 cpu : usr=1.30%, sys=1.71%, ctx=1436, majf=0, minf=1 00:28:21.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:28:21.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:21.587 issued rwts: total=0,4471,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.587 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:21.587 job5: (groupid=0, jobs=1): err= 0: pid=3670808: Fri Apr 26 20:45:39 2024 00:28:21.587 write: IOPS=633, BW=158MiB/s (166MB/s)(1599MiB/10097msec); 0 zone resets 00:28:21.587 slat (usec): min=21, max=43737, avg=1560.86, stdev=2763.20 00:28:21.587 clat (msec): min=9, max=194, avg=99.10, stdev=16.55 00:28:21.587 lat (msec): min=9, max=194, avg=100.66, stdev=16.56 00:28:21.587 clat percentiles (msec): 00:28:21.587 | 1.00th=[ 48], 5.00th=[ 68], 10.00th=[ 72], 20.00th=[ 94], 00:28:21.587 | 30.00th=[ 101], 40.00th=[ 103], 50.00th=[ 104], 60.00th=[ 106], 00:28:21.587 | 70.00th=[ 108], 80.00th=[ 109], 90.00th=[ 111], 95.00th=[ 112], 00:28:21.587 | 99.00th=[ 140], 99.50th=[ 142], 99.90th=[ 182], 99.95th=[ 188], 00:28:21.587 | 99.99th=[ 194] 00:28:21.587 bw ( KiB/s): min=141312, max=225280, per=11.08%, avg=162145.25, stdev=23305.47, samples=20 00:28:21.587 iops : min= 552, max= 880, avg=633.35, stdev=90.98, samples=20 
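fio reports every bandwidth figure in both binary and decimal units, which is why lines such as job5's "write: IOPS=633, BW=158MiB/s (166MB/s)" just above carry two numbers: the parenthesized value is simply MiB/s scaled by 1048576/10^6. A quick shell check of that arithmetic (nothing test-specific):

awk 'BEGIN { printf "%.0f MB/s\n", 158 * 1048576 / 1e6 }'
# prints: 166 MB/s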
00:28:21.587 lat (msec) : 10=0.06%, 20=0.13%, 50=0.81%, 100=28.91%, 250=70.09% 00:28:21.587 cpu : usr=1.77%, sys=1.59%, ctx=1634, majf=0, minf=1 00:28:21.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:28:21.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:21.587 issued rwts: total=0,6396,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.587 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:21.587 job6: (groupid=0, jobs=1): err= 0: pid=3670809: Fri Apr 26 20:45:39 2024 00:28:21.587 write: IOPS=603, BW=151MiB/s (158MB/s)(1523MiB/10097msec); 0 zone resets 00:28:21.587 slat (usec): min=17, max=13413, avg=1557.44, stdev=2770.64 00:28:21.587 clat (msec): min=3, max=204, avg=104.50, stdev=17.84 00:28:21.587 lat (msec): min=3, max=204, avg=106.06, stdev=17.86 00:28:21.587 clat percentiles (msec): 00:28:21.587 | 1.00th=[ 24], 5.00th=[ 91], 10.00th=[ 96], 20.00th=[ 101], 00:28:21.587 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 106], 60.00th=[ 108], 00:28:21.587 | 70.00th=[ 109], 80.00th=[ 110], 90.00th=[ 112], 95.00th=[ 131], 00:28:21.587 | 99.00th=[ 163], 99.50th=[ 180], 99.90th=[ 194], 99.95th=[ 199], 00:28:21.587 | 99.99th=[ 205] 00:28:21.587 bw ( KiB/s): min=136192, max=190976, per=10.55%, avg=154332.60, stdev=10831.46, samples=20 00:28:21.587 iops : min= 532, max= 746, avg=602.85, stdev=42.31, samples=20 00:28:21.587 lat (msec) : 4=0.05%, 10=0.43%, 20=0.30%, 50=1.44%, 100=18.57% 00:28:21.587 lat (msec) : 250=79.22% 00:28:21.587 cpu : usr=1.91%, sys=1.23%, ctx=1842, majf=0, minf=1 00:28:21.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:28:21.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:21.587 issued rwts: total=0,6091,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.587 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:21.587 job7: (groupid=0, jobs=1): err= 0: pid=3670811: Fri Apr 26 20:45:39 2024 00:28:21.587 write: IOPS=585, BW=146MiB/s (153MB/s)(1476MiB/10084msec); 0 zone resets 00:28:21.587 slat (usec): min=17, max=16876, avg=1691.79, stdev=2923.98 00:28:21.587 clat (msec): min=12, max=173, avg=107.59, stdev=17.39 00:28:21.587 lat (msec): min=12, max=173, avg=109.28, stdev=17.41 00:28:21.587 clat percentiles (msec): 00:28:21.587 | 1.00th=[ 84], 5.00th=[ 86], 10.00th=[ 89], 20.00th=[ 91], 00:28:21.587 | 30.00th=[ 92], 40.00th=[ 96], 50.00th=[ 112], 60.00th=[ 118], 00:28:21.587 | 70.00th=[ 122], 80.00th=[ 124], 90.00th=[ 126], 95.00th=[ 128], 00:28:21.587 | 99.00th=[ 150], 99.50th=[ 155], 99.90th=[ 167], 99.95th=[ 167], 00:28:21.587 | 99.99th=[ 174] 00:28:21.587 bw ( KiB/s): min=129024, max=184320, per=10.22%, avg=149529.60, stdev=19646.87, samples=20 00:28:21.587 iops : min= 504, max= 720, avg=584.10, stdev=76.75, samples=20 00:28:21.587 lat (msec) : 20=0.07%, 50=0.34%, 100=42.46%, 250=57.13% 00:28:21.587 cpu : usr=1.59%, sys=1.47%, ctx=1523, majf=0, minf=1 00:28:21.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:28:21.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:21.587 issued rwts: total=0,5904,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.587 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:21.587 job8: 
(groupid=0, jobs=1): err= 0: pid=3670817: Fri Apr 26 20:45:39 2024 00:28:21.587 write: IOPS=408, BW=102MiB/s (107MB/s)(1037MiB/10144msec); 0 zone resets 00:28:21.587 slat (usec): min=18, max=57289, avg=2407.00, stdev=4474.43 00:28:21.587 clat (msec): min=42, max=279, avg=153.98, stdev=22.79 00:28:21.587 lat (msec): min=42, max=280, avg=156.39, stdev=22.70 00:28:21.587 clat percentiles (msec): 00:28:21.587 | 1.00th=[ 126], 5.00th=[ 134], 10.00th=[ 136], 20.00th=[ 140], 00:28:21.587 | 30.00th=[ 144], 40.00th=[ 146], 50.00th=[ 148], 60.00th=[ 150], 00:28:21.587 | 70.00th=[ 157], 80.00th=[ 163], 90.00th=[ 180], 95.00th=[ 209], 00:28:21.587 | 99.00th=[ 234], 99.50th=[ 239], 99.90th=[ 271], 99.95th=[ 271], 00:28:21.587 | 99.99th=[ 279] 00:28:21.587 bw ( KiB/s): min=72192, max=116736, per=7.15%, avg=104550.40, stdev=12279.69, samples=20 00:28:21.587 iops : min= 282, max= 456, avg=408.40, stdev=47.97, samples=20 00:28:21.587 lat (msec) : 50=0.10%, 100=0.31%, 250=99.25%, 500=0.34% 00:28:21.587 cpu : usr=1.27%, sys=1.60%, ctx=1084, majf=0, minf=1 00:28:21.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:28:21.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:21.587 issued rwts: total=0,4148,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.587 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:21.587 job9: (groupid=0, jobs=1): err= 0: pid=3670818: Fri Apr 26 20:45:39 2024 00:28:21.587 write: IOPS=527, BW=132MiB/s (138MB/s)(1329MiB/10084msec); 0 zone resets 00:28:21.587 slat (usec): min=20, max=106962, avg=1711.48, stdev=4218.29 00:28:21.587 clat (msec): min=8, max=288, avg=119.63, stdev=36.08 00:28:21.587 lat (msec): min=8, max=288, avg=121.34, stdev=36.44 00:28:21.587 clat percentiles (msec): 00:28:21.587 | 1.00th=[ 39], 5.00th=[ 78], 10.00th=[ 86], 20.00th=[ 91], 00:28:21.587 | 30.00th=[ 94], 40.00th=[ 116], 50.00th=[ 121], 60.00th=[ 123], 00:28:21.587 | 70.00th=[ 125], 80.00th=[ 129], 90.00th=[ 176], 95.00th=[ 192], 00:28:21.587 | 99.00th=[ 241], 99.50th=[ 249], 99.90th=[ 284], 99.95th=[ 284], 00:28:21.587 | 99.99th=[ 288] 00:28:21.587 bw ( KiB/s): min=84992, max=182272, per=9.19%, avg=134451.20, stdev=29506.02, samples=20 00:28:21.587 iops : min= 332, max= 712, avg=525.20, stdev=115.26, samples=20 00:28:21.587 lat (msec) : 10=0.02%, 20=0.21%, 50=1.79%, 100=29.80%, 250=67.81% 00:28:21.587 lat (msec) : 500=0.38% 00:28:21.587 cpu : usr=1.54%, sys=1.32%, ctx=1759, majf=0, minf=1 00:28:21.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:28:21.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:21.587 issued rwts: total=0,5315,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.587 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:21.587 job10: (groupid=0, jobs=1): err= 0: pid=3670819: Fri Apr 26 20:45:39 2024 00:28:21.587 write: IOPS=574, BW=144MiB/s (150MB/s)(1454MiB/10128msec); 0 zone resets 00:28:21.587 slat (usec): min=18, max=148882, avg=1636.88, stdev=3617.74 00:28:21.587 clat (msec): min=8, max=304, avg=109.76, stdev=32.08 00:28:21.587 lat (msec): min=10, max=304, avg=111.40, stdev=32.34 00:28:21.587 clat percentiles (msec): 00:28:21.587 | 1.00th=[ 59], 5.00th=[ 89], 10.00th=[ 91], 20.00th=[ 93], 00:28:21.587 | 30.00th=[ 95], 40.00th=[ 96], 50.00th=[ 97], 60.00th=[ 99], 00:28:21.587 | 
70.00th=[ 104], 80.00th=[ 131], 90.00th=[ 140], 95.00th=[ 169], 00:28:21.587 | 99.00th=[ 236], 99.50th=[ 247], 99.90th=[ 292], 99.95th=[ 305], 00:28:21.587 | 99.99th=[ 305] 00:28:21.587 bw ( KiB/s): min=57344, max=174080, per=10.06%, avg=147225.60, stdev=31573.81, samples=20 00:28:21.587 iops : min= 224, max= 680, avg=575.10, stdev=123.34, samples=20 00:28:21.587 lat (msec) : 10=0.02%, 20=0.19%, 50=0.60%, 100=64.71%, 250=34.06% 00:28:21.587 lat (msec) : 500=0.43% 00:28:21.587 cpu : usr=1.74%, sys=1.40%, ctx=1731, majf=0, minf=1 00:28:21.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:28:21.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:21.587 issued rwts: total=0,5814,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.587 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:21.587 00:28:21.587 Run status group 0 (all jobs): 00:28:21.587 WRITE: bw=1429MiB/s (1498MB/s), 102MiB/s-158MiB/s (107MB/s-166MB/s), io=14.2GiB (15.2GB), run=10084-10144msec 00:28:21.587 00:28:21.587 Disk stats (read/write): 00:28:21.587 nvme0n1: ios=49/8212, merge=0/0, ticks=3528/1224106, in_queue=1227634, util=100.00% 00:28:21.587 nvme10n1: ios=47/8295, merge=0/0, ticks=1436/1225825, in_queue=1227261, util=100.00% 00:28:21.587 nvme1n1: ios=48/11894, merge=0/0, ticks=3045/1199452, in_queue=1202497, util=100.00% 00:28:21.587 nvme2n1: ios=46/11095, merge=0/0, ticks=1790/1226307, in_queue=1228097, util=100.00% 00:28:21.587 nvme3n1: ios=0/8896, merge=0/0, ticks=0/1226874, in_queue=1226874, util=97.38% 00:28:21.587 nvme4n1: ios=43/12786, merge=0/0, ticks=950/1221464, in_queue=1222414, util=100.00% 00:28:21.587 nvme5n1: ios=0/12176, merge=0/0, ticks=0/1230934, in_queue=1230934, util=98.02% 00:28:21.587 nvme6n1: ios=0/11468, merge=0/0, ticks=0/1199346, in_queue=1199346, util=98.15% 00:28:21.587 nvme7n1: ios=43/8250, merge=0/0, ticks=2690/1222489, in_queue=1225179, util=99.87% 00:28:21.587 nvme8n1: ios=44/10292, merge=0/0, ticks=2593/1187540, in_queue=1190133, util=99.88% 00:28:21.587 nvme9n1: ios=43/11592, merge=0/0, ticks=624/1228545, in_queue=1229169, util=99.91% 00:28:21.587 20:45:39 -- target/multiconnection.sh@36 -- # sync 00:28:21.587 20:45:39 -- target/multiconnection.sh@37 -- # seq 1 11 00:28:21.587 20:45:39 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:21.587 20:45:39 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:21.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:21.587 20:45:39 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:28:21.587 20:45:39 -- common/autotest_common.sh@1198 -- # local i=0 00:28:21.587 20:45:39 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:28:21.587 20:45:39 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:28:21.587 20:45:39 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:21.587 20:45:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:28:21.587 20:45:39 -- common/autotest_common.sh@1210 -- # return 0 00:28:21.587 20:45:39 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:21.587 20:45:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.587 20:45:39 -- common/autotest_common.sh@10 -- # set +x 00:28:21.587 20:45:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.587 20:45:39 -- 
target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:21.588 20:45:39 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:28:22.157 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:28:22.157 20:45:40 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:28:22.157 20:45:40 -- common/autotest_common.sh@1198 -- # local i=0 00:28:22.157 20:45:40 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:28:22.157 20:45:40 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:28:22.157 20:45:40 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:22.157 20:45:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:28:22.157 20:45:40 -- common/autotest_common.sh@1210 -- # return 0 00:28:22.157 20:45:40 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:22.157 20:45:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:22.157 20:45:40 -- common/autotest_common.sh@10 -- # set +x 00:28:22.157 20:45:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:22.157 20:45:40 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:22.157 20:45:40 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:28:22.416 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:28:22.416 20:45:40 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:28:22.416 20:45:40 -- common/autotest_common.sh@1198 -- # local i=0 00:28:22.416 20:45:40 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:28:22.416 20:45:40 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:28:22.416 20:45:40 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:22.416 20:45:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:28:22.416 20:45:40 -- common/autotest_common.sh@1210 -- # return 0 00:28:22.416 20:45:40 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:28:22.416 20:45:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:22.416 20:45:40 -- common/autotest_common.sh@10 -- # set +x 00:28:22.416 20:45:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:22.416 20:45:40 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:22.416 20:45:40 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:28:22.989 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:28:22.989 20:45:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:28:22.989 20:45:41 -- common/autotest_common.sh@1198 -- # local i=0 00:28:22.989 20:45:41 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:28:22.989 20:45:41 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:28:22.989 20:45:41 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:22.989 20:45:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:28:22.989 20:45:41 -- common/autotest_common.sh@1210 -- # return 0 00:28:22.989 20:45:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:28:22.989 20:45:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:22.989 20:45:41 -- common/autotest_common.sh@10 -- # set +x 00:28:22.989 20:45:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:22.989 20:45:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:22.989 20:45:41 -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:28:23.248 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:28:23.248 20:45:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:28:23.248 20:45:41 -- common/autotest_common.sh@1198 -- # local i=0 00:28:23.248 20:45:41 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:28:23.248 20:45:41 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:28:23.248 20:45:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:28:23.248 20:45:41 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:23.248 20:45:41 -- common/autotest_common.sh@1210 -- # return 0 00:28:23.248 20:45:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:28:23.248 20:45:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:23.248 20:45:41 -- common/autotest_common.sh@10 -- # set +x 00:28:23.248 20:45:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:23.248 20:45:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:23.248 20:45:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:28:23.506 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:28:23.506 20:45:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:28:23.506 20:45:41 -- common/autotest_common.sh@1198 -- # local i=0 00:28:23.506 20:45:41 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:28:23.506 20:45:41 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:28:23.506 20:45:41 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:23.506 20:45:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:28:23.506 20:45:41 -- common/autotest_common.sh@1210 -- # return 0 00:28:23.506 20:45:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:28:23.506 20:45:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:23.506 20:45:41 -- common/autotest_common.sh@10 -- # set +x 00:28:23.506 20:45:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:23.506 20:45:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:23.506 20:45:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:28:23.764 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:28:23.764 20:45:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:28:23.764 20:45:41 -- common/autotest_common.sh@1198 -- # local i=0 00:28:23.764 20:45:41 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:28:23.764 20:45:41 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:28:23.764 20:45:41 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:23.764 20:45:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:28:23.764 20:45:41 -- common/autotest_common.sh@1210 -- # return 0 00:28:23.764 20:45:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:28:23.764 20:45:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:23.764 20:45:41 -- common/autotest_common.sh@10 -- # set +x 00:28:23.765 20:45:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:23.765 20:45:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:23.765 20:45:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:28:24.023 
NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:28:24.023 20:45:42 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:28:24.023 20:45:42 -- common/autotest_common.sh@1198 -- # local i=0 00:28:24.023 20:45:42 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:28:24.023 20:45:42 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:28:24.023 20:45:42 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:24.023 20:45:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:28:24.023 20:45:42 -- common/autotest_common.sh@1210 -- # return 0 00:28:24.023 20:45:42 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:28:24.023 20:45:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.023 20:45:42 -- common/autotest_common.sh@10 -- # set +x 00:28:24.023 20:45:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.023 20:45:42 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:24.023 20:45:42 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:28:24.282 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:28:24.282 20:45:42 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:28:24.282 20:45:42 -- common/autotest_common.sh@1198 -- # local i=0 00:28:24.282 20:45:42 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:28:24.282 20:45:42 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:28:24.282 20:45:42 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:24.282 20:45:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:28:24.282 20:45:42 -- common/autotest_common.sh@1210 -- # return 0 00:28:24.282 20:45:42 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:28:24.282 20:45:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.282 20:45:42 -- common/autotest_common.sh@10 -- # set +x 00:28:24.282 20:45:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.282 20:45:42 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:24.282 20:45:42 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:28:24.542 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:28:24.542 20:45:42 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:28:24.542 20:45:42 -- common/autotest_common.sh@1198 -- # local i=0 00:28:24.542 20:45:42 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:28:24.542 20:45:42 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:28:24.542 20:45:42 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:24.542 20:45:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:28:24.542 20:45:42 -- common/autotest_common.sh@1210 -- # return 0 00:28:24.542 20:45:42 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:28:24.542 20:45:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.542 20:45:42 -- common/autotest_common.sh@10 -- # set +x 00:28:24.542 20:45:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.542 20:45:42 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:24.542 20:45:42 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:28:24.804 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:28:24.804 20:45:42 -- 
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:28:24.804 20:45:42 -- common/autotest_common.sh@1198 -- # local i=0 00:28:24.804 20:45:42 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:28:24.804 20:45:42 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:28:24.804 20:45:42 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:24.804 20:45:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:28:24.804 20:45:42 -- common/autotest_common.sh@1210 -- # return 0 00:28:24.804 20:45:42 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:28:24.804 20:45:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.804 20:45:42 -- common/autotest_common.sh@10 -- # set +x 00:28:24.804 20:45:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.804 20:45:42 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:28:24.804 20:45:42 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:28:24.804 20:45:42 -- target/multiconnection.sh@47 -- # nvmftestfini 00:28:24.804 20:45:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:24.804 20:45:42 -- nvmf/common.sh@116 -- # sync 00:28:24.804 20:45:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:24.804 20:45:42 -- nvmf/common.sh@119 -- # set +e 00:28:24.804 20:45:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:24.804 20:45:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:24.804 rmmod nvme_tcp 00:28:24.804 rmmod nvme_fabrics 00:28:24.804 rmmod nvme_keyring 00:28:24.804 20:45:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:24.804 20:45:43 -- nvmf/common.sh@123 -- # set -e 00:28:24.804 20:45:43 -- nvmf/common.sh@124 -- # return 0 00:28:24.804 20:45:43 -- nvmf/common.sh@477 -- # '[' -n 3659222 ']' 00:28:24.804 20:45:43 -- nvmf/common.sh@478 -- # killprocess 3659222 00:28:24.804 20:45:43 -- common/autotest_common.sh@926 -- # '[' -z 3659222 ']' 00:28:24.804 20:45:43 -- common/autotest_common.sh@930 -- # kill -0 3659222 00:28:24.804 20:45:43 -- common/autotest_common.sh@931 -- # uname 00:28:24.804 20:45:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:24.804 20:45:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3659222 00:28:24.804 20:45:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:24.804 20:45:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:24.804 20:45:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3659222' 00:28:24.804 killing process with pid 3659222 00:28:24.804 20:45:43 -- common/autotest_common.sh@945 -- # kill 3659222 00:28:24.804 20:45:43 -- common/autotest_common.sh@950 -- # wait 3659222 00:28:26.183 20:45:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:26.183 20:45:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:26.183 20:45:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:26.183 20:45:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:26.183 20:45:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:26.183 20:45:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.183 20:45:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:26.183 20:45:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:28.093 20:45:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:28.093 00:28:28.093 real 1m16.897s 00:28:28.093 user 5m3.139s 00:28:28.094 sys 
0m19.844s 00:28:28.094 20:45:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:28.094 20:45:46 -- common/autotest_common.sh@10 -- # set +x 00:28:28.094 ************************************ 00:28:28.094 END TEST nvmf_multiconnection 00:28:28.094 ************************************ 00:28:28.094 20:45:46 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:28:28.094 20:45:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:28.094 20:45:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:28.094 20:45:46 -- common/autotest_common.sh@10 -- # set +x 00:28:28.094 ************************************ 00:28:28.094 START TEST nvmf_initiator_timeout 00:28:28.094 ************************************ 00:28:28.094 20:45:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:28:28.094 * Looking for test storage... 00:28:28.094 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:28:28.094 20:45:46 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:28:28.094 20:45:46 -- nvmf/common.sh@7 -- # uname -s 00:28:28.094 20:45:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:28.094 20:45:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:28.094 20:45:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:28.094 20:45:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:28.094 20:45:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:28.094 20:45:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:28.094 20:45:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:28.094 20:45:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:28.094 20:45:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:28.094 20:45:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:28.353 20:45:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:28:28.353 20:45:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:28:28.353 20:45:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:28.353 20:45:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:28.353 20:45:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:28:28.353 20:45:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:28:28.353 20:45:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:28.353 20:45:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:28.353 20:45:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:28.353 20:45:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.353 20:45:46 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.353 20:45:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.354 20:45:46 -- paths/export.sh@5 -- # export PATH 00:28:28.354 20:45:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.354 20:45:46 -- nvmf/common.sh@46 -- # : 0 00:28:28.354 20:45:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:28.354 20:45:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:28.354 20:45:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:28.354 20:45:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:28.354 20:45:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:28.354 20:45:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:28.354 20:45:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:28.354 20:45:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:28.354 20:45:46 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:28.354 20:45:46 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:28.354 20:45:46 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:28:28.354 20:45:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:28.354 20:45:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:28.354 20:45:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:28.354 20:45:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:28.354 20:45:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:28.354 20:45:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:28.354 20:45:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:28.354 20:45:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:28.354 20:45:46 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:28:28.354 20:45:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:28.354 20:45:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:28.354 20:45:46 -- common/autotest_common.sh@10 -- # set +x 00:28:33.741 20:45:51 -- nvmf/common.sh@288 -- # 
local intel=0x8086 mellanox=0x15b3 pci 00:28:33.741 20:45:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:33.741 20:45:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:33.741 20:45:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:33.741 20:45:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:33.741 20:45:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:33.741 20:45:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:33.741 20:45:51 -- nvmf/common.sh@294 -- # net_devs=() 00:28:33.741 20:45:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:33.741 20:45:51 -- nvmf/common.sh@295 -- # e810=() 00:28:33.741 20:45:51 -- nvmf/common.sh@295 -- # local -ga e810 00:28:33.741 20:45:51 -- nvmf/common.sh@296 -- # x722=() 00:28:33.741 20:45:51 -- nvmf/common.sh@296 -- # local -ga x722 00:28:33.741 20:45:51 -- nvmf/common.sh@297 -- # mlx=() 00:28:33.741 20:45:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:33.741 20:45:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:33.741 20:45:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:33.741 20:45:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:33.741 20:45:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:33.741 20:45:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:33.741 20:45:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:33.741 20:45:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:33.741 20:45:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:33.741 20:45:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:33.741 20:45:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:33.741 20:45:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:33.741 20:45:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:33.741 20:45:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:33.741 20:45:51 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:28:33.741 20:45:51 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:28:33.741 20:45:51 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:28:33.741 20:45:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:33.741 20:45:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:33.741 20:45:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:28:33.741 Found 0000:27:00.0 (0x8086 - 0x159b) 00:28:33.741 20:45:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:33.741 20:45:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:33.741 20:45:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.741 20:45:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.741 20:45:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:33.741 20:45:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:33.741 20:45:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:28:33.741 Found 0000:27:00.1 (0x8086 - 0x159b) 00:28:33.741 20:45:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:33.741 20:45:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:33.741 20:45:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.741 20:45:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.741 20:45:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:33.741 20:45:51 -- nvmf/common.sh@365 
-- # (( 0 > 0 )) 00:28:33.741 20:45:51 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:28:33.741 20:45:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:33.741 20:45:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.741 20:45:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:33.741 20:45:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.741 20:45:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:28:33.741 Found net devices under 0000:27:00.0: cvl_0_0 00:28:33.741 20:45:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.741 20:45:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:33.741 20:45:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.741 20:45:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:33.741 20:45:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.741 20:45:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:28:33.741 Found net devices under 0000:27:00.1: cvl_0_1 00:28:33.741 20:45:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.741 20:45:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:33.741 20:45:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:33.741 20:45:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:33.741 20:45:51 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:33.741 20:45:51 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:33.741 20:45:51 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:33.741 20:45:51 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:33.741 20:45:51 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:33.741 20:45:51 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:33.741 20:45:51 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:33.741 20:45:51 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:33.741 20:45:51 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:33.741 20:45:51 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:33.741 20:45:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:33.741 20:45:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:33.741 20:45:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:33.742 20:45:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:33.742 20:45:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:33.742 20:45:51 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:33.742 20:45:51 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:33.742 20:45:51 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:33.742 20:45:51 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:33.742 20:45:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:33.742 20:45:51 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:33.742 20:45:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:33.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:33.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:28:33.742 00:28:33.742 --- 10.0.0.2 ping statistics --- 00:28:33.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.742 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:28:33.742 20:45:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:33.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:33.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:28:33.742 00:28:33.742 --- 10.0.0.1 ping statistics --- 00:28:33.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.742 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:28:33.742 20:45:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:33.742 20:45:52 -- nvmf/common.sh@410 -- # return 0 00:28:33.742 20:45:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:33.742 20:45:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:33.742 20:45:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:33.742 20:45:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:33.742 20:45:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:33.742 20:45:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:33.742 20:45:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:33.742 20:45:52 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:28:33.742 20:45:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:33.742 20:45:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:33.742 20:45:52 -- common/autotest_common.sh@10 -- # set +x 00:28:33.742 20:45:52 -- nvmf/common.sh@469 -- # nvmfpid=3677741 00:28:33.742 20:45:52 -- nvmf/common.sh@470 -- # waitforlisten 3677741 00:28:33.742 20:45:52 -- common/autotest_common.sh@819 -- # '[' -z 3677741 ']' 00:28:33.742 20:45:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.742 20:45:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:33.742 20:45:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:33.742 20:45:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:33.742 20:45:52 -- common/autotest_common.sh@10 -- # set +x 00:28:33.742 20:45:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:34.000 [2024-04-26 20:45:52.138433] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:34.000 [2024-04-26 20:45:52.138545] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:34.000 EAL: No free 2048 kB hugepages reported on node 1 00:28:34.000 [2024-04-26 20:45:52.261086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:34.259 [2024-04-26 20:45:52.361110] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:34.259 [2024-04-26 20:45:52.361286] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:34.259 [2024-04-26 20:45:52.361299] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
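Annotation: the trace above is nvmf_tcp_init assembling its loopback topology out of the two ports of one NIC. cvl_0_0 is moved into the network namespace cvl_0_0_ns_spdk and addressed as the target side (10.0.0.2/24), cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1/24), an iptables rule admits NVMe/TCP on port 4420, and a ping in each direction confirms the link (which only succeeds if the two ports can actually reach each other, e.g. wired back to back). A condensed re-statement of the traced commands, assuming root privileges:

    NS=cvl_0_0_ns_spdk        # namespace holding the target-side port
    TGT=cvl_0_0               # port that will serve the NVMe-oF target
    INI=cvl_0_1               # port the kernel initiator keeps
    ip -4 addr flush "$TGT"; ip -4 addr flush "$INI"   # start from clean addresses
    ip netns add "$NS"
    ip link set "$TGT" netns "$NS"                     # isolate the target side
    ip addr add 10.0.0.1/24 dev "$INI"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
    ip link set "$INI" up
    ip netns exec "$NS" ip link set "$TGT" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                        # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator

Both ends live on one host, which is why the harness can exercise the kernel initiator against the SPDK target without a second machine.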
00:28:34.259 [2024-04-26 20:45:52.361308] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:34.259 [2024-04-26 20:45:52.361398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.259 [2024-04-26 20:45:52.361496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:34.259 [2024-04-26 20:45:52.361529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.259 [2024-04-26 20:45:52.361539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:34.517 20:45:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:34.517 20:45:52 -- common/autotest_common.sh@852 -- # return 0 00:28:34.517 20:45:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:34.517 20:45:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:34.517 20:45:52 -- common/autotest_common.sh@10 -- # set +x 00:28:34.778 20:45:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:34.778 20:45:52 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:28:34.778 20:45:52 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:34.778 20:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:34.778 20:45:52 -- common/autotest_common.sh@10 -- # set +x 00:28:34.778 Malloc0 00:28:34.778 20:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:34.778 20:45:52 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:28:34.778 20:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:34.778 20:45:52 -- common/autotest_common.sh@10 -- # set +x 00:28:34.778 Delay0 00:28:34.778 20:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:34.778 20:45:52 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:34.778 20:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:34.778 20:45:52 -- common/autotest_common.sh@10 -- # set +x 00:28:34.778 [2024-04-26 20:45:52.912467] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:34.778 20:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:34.778 20:45:52 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:28:34.778 20:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:34.778 20:45:52 -- common/autotest_common.sh@10 -- # set +x 00:28:34.778 20:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:34.778 20:45:52 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:34.778 20:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:34.778 20:45:52 -- common/autotest_common.sh@10 -- # set +x 00:28:34.778 20:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:34.778 20:45:52 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:34.778 20:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:34.778 20:45:52 -- common/autotest_common.sh@10 -- # set +x 00:28:34.778 [2024-04-26 20:45:52.940689] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:34.778 20:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
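Annotation: rpc_cmd in the trace is the harness wrapper around SPDK's scripts/rpc.py, and the block above builds the whole delay-injected target in a handful of calls. The same configuration written out directly, a sketch assuming an SPDK checkout and a target already listening on the default /var/tmp/spdk.sock:

    RPC=scripts/rpc.py
    $RPC bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM bdev, 512 B blocks
    # wrap Malloc0 in a delay bdev; the four values are latencies in
    # microseconds, flags exactly as traced above
    $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    $RPC nvmf_create_transport -t tcp -o -u 8192   # TCP transport, options as traced
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Exposing Delay0 rather than Malloc0 as the namespace is the point of this test: the delay bdev's latencies can be retuned at runtime to push I/O past the initiator's timeout.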
00:28:34.778 20:45:52 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:28:36.155 20:45:54 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:28:36.155 20:45:54 -- common/autotest_common.sh@1177 -- # local i=0 00:28:36.155 20:45:54 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:28:36.155 20:45:54 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:28:36.155 20:45:54 -- common/autotest_common.sh@1184 -- # sleep 2 00:28:38.062 20:45:56 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:28:38.062 20:45:56 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:28:38.062 20:45:56 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:28:38.062 20:45:56 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:28:38.062 20:45:56 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:28:38.062 20:45:56 -- common/autotest_common.sh@1187 -- # return 0 00:28:38.062 20:45:56 -- target/initiator_timeout.sh@35 -- # fio_pid=3678417 00:28:38.062 20:45:56 -- target/initiator_timeout.sh@37 -- # sleep 3 00:28:38.062 20:45:56 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:28:38.062 [global] 00:28:38.062 thread=1 00:28:38.062 invalidate=1 00:28:38.062 rw=write 00:28:38.062 time_based=1 00:28:38.062 runtime=60 00:28:38.062 ioengine=libaio 00:28:38.062 direct=1 00:28:38.062 bs=4096 00:28:38.062 iodepth=1 00:28:38.062 norandommap=0 00:28:38.062 numjobs=1 00:28:38.062 00:28:38.062 verify_dump=1 00:28:38.062 verify_backlog=512 00:28:38.062 verify_state_save=0 00:28:38.062 do_verify=1 00:28:38.062 verify=crc32c-intel 00:28:38.062 [job0] 00:28:38.062 filename=/dev/nvme0n1 00:28:38.062 Could not set queue depth (nvme0n1) 00:28:38.637 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:38.637 fio-3.35 00:28:38.637 Starting 1 thread 00:28:41.165 20:45:59 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:28:41.165 20:45:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:41.165 20:45:59 -- common/autotest_common.sh@10 -- # set +x 00:28:41.166 true 00:28:41.166 20:45:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:41.166 20:45:59 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:28:41.166 20:45:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:41.166 20:45:59 -- common/autotest_common.sh@10 -- # set +x 00:28:41.166 true 00:28:41.166 20:45:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:41.166 20:45:59 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:28:41.166 20:45:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:41.166 20:45:59 -- common/autotest_common.sh@10 -- # set +x 00:28:41.166 true 00:28:41.166 20:45:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:41.166 20:45:59 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:28:41.166 20:45:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:41.166 20:45:59 -- common/autotest_common.sh@10 -- # set +x 00:28:41.166 true 
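Annotation: with fio mid-run, the test raises Delay0's latencies from 30 us into the 31000000 us (31 s) range, just past the kernel initiator's default 30 s I/O timeout (visible on stock kernels at /sys/module/nvme_core/parameters/io_timeout), so in-flight commands hit the timeout/abort path; a few lines below it drops them back so fio can complete. A sketch of the toggle, assuming the same rpc.py as above (the trace uses 31000000 for three latency classes and 310000000 for p99_write; the loop here applies one uniform value):

    raise_us=31000000    # 31 s, deliberately beyond the 30 s initiator timeout
    for lat in avg_read avg_write p99_read p99_write; do
        scripts/rpc.py bdev_delay_update_latency Delay0 "$lat" "$raise_us"
    done
    sleep 3              # give outstanding I/O time to trip the initiator timeout
    for lat in avg_read avg_write p99_read p99_write; do
        scripts/rpc.py bdev_delay_update_latency Delay0 "$lat" 30   # back to 30 us
    done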
00:28:41.166 20:45:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:41.166 20:45:59 -- target/initiator_timeout.sh@45 -- # sleep 3 00:28:44.459 20:46:02 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:28:44.459 20:46:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.459 20:46:02 -- common/autotest_common.sh@10 -- # set +x 00:28:44.459 true 00:28:44.459 20:46:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.459 20:46:02 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:28:44.459 20:46:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.459 20:46:02 -- common/autotest_common.sh@10 -- # set +x 00:28:44.459 true 00:28:44.459 20:46:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.459 20:46:02 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:28:44.459 20:46:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.459 20:46:02 -- common/autotest_common.sh@10 -- # set +x 00:28:44.459 true 00:28:44.459 20:46:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.459 20:46:02 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:28:44.459 20:46:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.459 20:46:02 -- common/autotest_common.sh@10 -- # set +x 00:28:44.459 true 00:28:44.459 20:46:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.459 20:46:02 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:28:44.459 20:46:02 -- target/initiator_timeout.sh@54 -- # wait 3678417 00:29:40.679 00:29:40.679 job0: (groupid=0, jobs=1): err= 0: pid=3678798: Fri Apr 26 20:46:56 2024 00:29:40.679 read: IOPS=273, BW=1095KiB/s (1121kB/s)(64.2MiB/60014msec) 00:29:40.679 slat (usec): min=3, max=10391, avg=13.75, stdev=81.77 00:29:40.679 clat (usec): min=263, max=41936k, avg=3341.01, stdev=327235.80 00:29:40.679 lat (usec): min=269, max=41936k, avg=3354.76, stdev=327236.00 00:29:40.679 clat percentiles (usec): 00:29:40.679 | 1.00th=[ 302], 5.00th=[ 310], 10.00th=[ 318], 20.00th=[ 326], 00:29:40.679 | 30.00th=[ 334], 40.00th=[ 347], 50.00th=[ 359], 60.00th=[ 379], 00:29:40.679 | 70.00th=[ 408], 80.00th=[ 437], 90.00th=[ 469], 95.00th=[ 502], 00:29:40.679 | 99.00th=[ 1237], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:29:40.679 | 99.99th=[43254] 00:29:40.679 write: IOPS=281, BW=1126KiB/s (1153kB/s)(66.0MiB/60014msec); 0 zone resets 00:29:40.679 slat (usec): min=4, max=33477, avg=16.49, stdev=257.83 00:29:40.679 clat (usec): min=185, max=1535, avg=267.18, stdev=50.82 00:29:40.679 lat (usec): min=194, max=34202, avg=283.67, stdev=267.46 00:29:40.679 clat percentiles (usec): 00:29:40.679 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 229], 00:29:40.679 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 260], 00:29:40.679 | 70.00th=[ 277], 80.00th=[ 322], 90.00th=[ 343], 95.00th=[ 359], 00:29:40.679 | 99.00th=[ 400], 99.50th=[ 429], 99.90th=[ 519], 99.95th=[ 578], 00:29:40.679 | 99.99th=[ 1352] 00:29:40.679 bw ( KiB/s): min= 4096, max= 8192, per=100.00%, avg=5632.00, stdev=1298.45, samples=24 00:29:40.679 iops : min= 1024, max= 2048, avg=1408.00, stdev=324.61, samples=24 00:29:40.679 lat (usec) : 250=27.12%, 500=70.24%, 750=2.09%, 1000=0.04% 00:29:40.679 lat (msec) : 2=0.02%, 50=0.49%, >=2000=0.01% 00:29:40.679 cpu : usr=0.51%, sys=1.06%, ctx=33327, majf=0, minf=1 00:29:40.679 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:40.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:40.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:40.679 issued rwts: total=16425,16896,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:40.679 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:40.679 00:29:40.679 Run status group 0 (all jobs): 00:29:40.679 READ: bw=1095KiB/s (1121kB/s), 1095KiB/s-1095KiB/s (1121kB/s-1121kB/s), io=64.2MiB (67.3MB), run=60014-60014msec 00:29:40.679 WRITE: bw=1126KiB/s (1153kB/s), 1126KiB/s-1126KiB/s (1153kB/s-1153kB/s), io=66.0MiB (69.2MB), run=60014-60014msec 00:29:40.679 00:29:40.679 Disk stats (read/write): 00:29:40.679 nvme0n1: ios=16528/16896, merge=0/0, ticks=13882/4136, in_queue=18018, util=100.00% 00:29:40.679 20:46:56 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:40.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:40.679 20:46:57 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:40.679 20:46:57 -- common/autotest_common.sh@1198 -- # local i=0 00:29:40.679 20:46:57 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:40.679 20:46:57 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:29:40.679 20:46:57 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:29:40.679 20:46:57 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:40.679 20:46:57 -- common/autotest_common.sh@1210 -- # return 0 00:29:40.679 20:46:57 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:29:40.679 20:46:57 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:29:40.679 nvmf hotplug test: fio successful as expected 00:29:40.679 20:46:57 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:40.679 20:46:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:40.679 20:46:57 -- common/autotest_common.sh@10 -- # set +x 00:29:40.679 20:46:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:40.679 20:46:57 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:29:40.679 20:46:57 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:29:40.679 20:46:57 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:29:40.679 20:46:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:40.679 20:46:57 -- nvmf/common.sh@116 -- # sync 00:29:40.679 20:46:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:40.679 20:46:57 -- nvmf/common.sh@119 -- # set +e 00:29:40.679 20:46:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:40.679 20:46:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:40.679 rmmod nvme_tcp 00:29:40.679 rmmod nvme_fabrics 00:29:40.679 rmmod nvme_keyring 00:29:40.679 20:46:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:40.679 20:46:57 -- nvmf/common.sh@123 -- # set -e 00:29:40.679 20:46:57 -- nvmf/common.sh@124 -- # return 0 00:29:40.679 20:46:57 -- nvmf/common.sh@477 -- # '[' -n 3677741 ']' 00:29:40.679 20:46:57 -- nvmf/common.sh@478 -- # killprocess 3677741 00:29:40.679 20:46:57 -- common/autotest_common.sh@926 -- # '[' -z 3677741 ']' 00:29:40.679 20:46:57 -- common/autotest_common.sh@930 -- # kill -0 3677741 00:29:40.679 20:46:57 -- common/autotest_common.sh@931 -- # uname 00:29:40.679 20:46:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
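Annotation: the fio summary above is internally consistent; a quick check of the read side, runnable as shell arithmetic:

    echo $(( 1095 * 1024 / 4096 ))    # 1095 KiB/s at bs=4096   -> 273, matching "IOPS=273"
    echo $(( 16425 * 1000 / 60014 ))  # 16425 reads in 60014 ms -> 273 IOPS again

The clat max of 41936k usec (roughly 41.9 s) and the 0.01% of latencies in the >=2000 msec bucket are consistent with the handful of commands held in flight across the injected 31 s stall, while the bulk of the I/O stayed in the sub-millisecond range; fio still verifies cleanly, which is exactly what the "nvmf hotplug test: fio successful as expected" line above asserts.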
00:29:40.679 20:46:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3677741 00:29:40.679 20:46:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:40.679 20:46:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:40.679 20:46:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3677741' 00:29:40.679 killing process with pid 3677741 00:29:40.679 20:46:57 -- common/autotest_common.sh@945 -- # kill 3677741 00:29:40.679 20:46:57 -- common/autotest_common.sh@950 -- # wait 3677741 00:29:40.679 20:46:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:40.679 20:46:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:40.679 20:46:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:40.679 20:46:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:40.679 20:46:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:40.679 20:46:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.679 20:46:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:40.679 20:46:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.615 20:46:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:41.615 00:29:41.615 real 1m13.446s 00:29:41.615 user 4m39.223s 00:29:41.615 sys 0m6.112s 00:29:41.615 20:46:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:41.615 20:46:59 -- common/autotest_common.sh@10 -- # set +x 00:29:41.615 ************************************ 00:29:41.615 END TEST nvmf_initiator_timeout 00:29:41.615 ************************************ 00:29:41.615 20:46:59 -- nvmf/nvmf.sh@69 -- # [[ phy-fallback == phy ]] 00:29:41.615 20:46:59 -- nvmf/nvmf.sh@85 -- # timing_exit target 00:29:41.615 20:46:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:41.615 20:46:59 -- common/autotest_common.sh@10 -- # set +x 00:29:41.615 20:46:59 -- nvmf/nvmf.sh@87 -- # timing_enter host 00:29:41.615 20:46:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:41.615 20:46:59 -- common/autotest_common.sh@10 -- # set +x 00:29:41.615 20:46:59 -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:29:41.615 20:46:59 -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:41.615 20:46:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:41.615 20:46:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:41.615 20:46:59 -- common/autotest_common.sh@10 -- # set +x 00:29:41.615 ************************************ 00:29:41.615 START TEST nvmf_multicontroller 00:29:41.615 ************************************ 00:29:41.615 20:46:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:41.615 * Looking for test storage... 
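Annotation: killprocess, traced above, guards the kill with two checks before tearing the target down: the pid must still be alive (kill -0) and its comm must not be sudo, so the harness never signals a sudo wrapper instead of the reactor process. A condensed sketch of the visible common path (the branch taken when comm really is sudo is not shown in this trace):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                    # still running?
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                       # reap it if it is our child
    }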
00:29:41.615 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:29:41.615 20:46:59 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:29:41.615 20:46:59 -- nvmf/common.sh@7 -- # uname -s 00:29:41.615 20:46:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:41.615 20:46:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:41.615 20:46:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:41.615 20:46:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:41.615 20:46:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:41.615 20:46:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:41.615 20:46:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:41.615 20:46:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:41.615 20:46:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:41.615 20:46:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:41.875 20:46:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:29:41.875 20:46:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:29:41.875 20:46:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:41.875 20:46:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:41.875 20:46:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:41.875 20:46:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:29:41.875 20:46:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:41.875 20:46:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:41.875 20:46:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:41.875 20:46:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.875 20:46:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.875 20:46:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.875 20:46:59 -- paths/export.sh@5 -- # export PATH 00:29:41.875 20:46:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.875 20:46:59 -- nvmf/common.sh@46 -- # : 0 00:29:41.875 20:46:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:41.875 20:46:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:41.875 20:46:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:41.875 20:46:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:41.875 20:46:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:41.875 20:46:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:41.875 20:46:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:41.875 20:46:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:41.875 20:46:59 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:41.875 20:46:59 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:41.875 20:46:59 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:41.875 20:46:59 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:41.875 20:46:59 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:41.875 20:46:59 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:41.875 20:46:59 -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:41.875 20:46:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:41.875 20:46:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:41.875 20:46:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:41.875 20:46:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:41.875 20:46:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:41.875 20:46:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.875 20:46:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:41.875 20:46:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.875 20:46:59 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:29:41.875 20:46:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:41.875 20:46:59 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:41.875 20:46:59 -- common/autotest_common.sh@10 -- # set +x 00:29:47.149 20:47:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:47.149 20:47:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:47.149 20:47:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:47.149 20:47:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 
00:29:47.149 20:47:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:47.149 20:47:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:47.149 20:47:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:47.149 20:47:05 -- nvmf/common.sh@294 -- # net_devs=() 00:29:47.149 20:47:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:47.149 20:47:05 -- nvmf/common.sh@295 -- # e810=() 00:29:47.149 20:47:05 -- nvmf/common.sh@295 -- # local -ga e810 00:29:47.149 20:47:05 -- nvmf/common.sh@296 -- # x722=() 00:29:47.149 20:47:05 -- nvmf/common.sh@296 -- # local -ga x722 00:29:47.149 20:47:05 -- nvmf/common.sh@297 -- # mlx=() 00:29:47.149 20:47:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:47.149 20:47:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:47.149 20:47:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:47.149 20:47:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:47.149 20:47:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:47.149 20:47:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:47.149 20:47:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:47.149 20:47:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:47.149 20:47:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:47.149 20:47:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:47.149 20:47:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:47.149 20:47:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:47.149 20:47:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:47.149 20:47:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:47.149 20:47:05 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:29:47.149 20:47:05 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:29:47.149 20:47:05 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:29:47.149 20:47:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:47.149 20:47:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:47.149 20:47:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:29:47.149 Found 0000:27:00.0 (0x8086 - 0x159b) 00:29:47.149 20:47:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:47.149 20:47:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:47.149 20:47:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.149 20:47:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.149 20:47:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:47.149 20:47:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:47.149 20:47:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:29:47.149 Found 0000:27:00.1 (0x8086 - 0x159b) 00:29:47.149 20:47:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:47.149 20:47:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:47.149 20:47:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.149 20:47:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.149 20:47:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:47.149 20:47:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:47.149 20:47:05 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:29:47.149 20:47:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:47.149 20:47:05 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.149 20:47:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:47.149 20:47:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.149 20:47:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:29:47.149 Found net devices under 0000:27:00.0: cvl_0_0 00:29:47.149 20:47:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.149 20:47:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:47.149 20:47:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.149 20:47:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:47.149 20:47:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.149 20:47:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:29:47.149 Found net devices under 0000:27:00.1: cvl_0_1 00:29:47.149 20:47:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.149 20:47:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:47.149 20:47:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:47.149 20:47:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:47.149 20:47:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:47.149 20:47:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:47.149 20:47:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:47.149 20:47:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:47.149 20:47:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:47.149 20:47:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:47.149 20:47:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:47.149 20:47:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:47.149 20:47:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:47.149 20:47:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:47.149 20:47:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:47.149 20:47:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:47.149 20:47:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:47.149 20:47:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:47.149 20:47:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:47.149 20:47:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:47.149 20:47:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:47.149 20:47:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:47.149 20:47:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:47.149 20:47:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:47.149 20:47:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:47.149 20:47:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:47.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:47.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.523 ms 00:29:47.149 00:29:47.149 --- 10.0.0.2 ping statistics --- 00:29:47.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.149 rtt min/avg/max/mdev = 0.523/0.523/0.523/0.000 ms 00:29:47.149 20:47:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:47.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:47.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:29:47.149 00:29:47.149 --- 10.0.0.1 ping statistics --- 00:29:47.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.149 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:29:47.149 20:47:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:47.149 20:47:05 -- nvmf/common.sh@410 -- # return 0 00:29:47.149 20:47:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:47.149 20:47:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:47.149 20:47:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:47.149 20:47:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:47.149 20:47:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:47.149 20:47:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:47.149 20:47:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:47.149 20:47:05 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:47.149 20:47:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:47.149 20:47:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:47.149 20:47:05 -- common/autotest_common.sh@10 -- # set +x 00:29:47.149 20:47:05 -- nvmf/common.sh@469 -- # nvmfpid=3694350 00:29:47.149 20:47:05 -- nvmf/common.sh@470 -- # waitforlisten 3694350 00:29:47.149 20:47:05 -- common/autotest_common.sh@819 -- # '[' -z 3694350 ']' 00:29:47.149 20:47:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.149 20:47:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:47.149 20:47:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:47.149 20:47:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:47.149 20:47:05 -- common/autotest_common.sh@10 -- # set +x 00:29:47.149 20:47:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:47.149 [2024-04-26 20:47:05.395279] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:47.149 [2024-04-26 20:47:05.395401] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:47.149 EAL: No free 2048 kB hugepages reported on node 1 00:29:47.408 [2024-04-26 20:47:05.518720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:47.408 [2024-04-26 20:47:05.620959] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:47.409 [2024-04-26 20:47:05.621189] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:47.409 [2024-04-26 20:47:05.621205] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:47.409 [2024-04-26 20:47:05.621218] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
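Annotation: -m is SPDK's reactor core mask. The earlier app start in this log used -m 0xF and reported four cores; this one uses -m 0xE and reports three, with no reactor on core 0. The arithmetic, checkable in the shell:

    printf '%d\n' 0xE          # 14
    echo 'obase=2; 14' | bc    # 1110 -> cores 1,2,3 enabled, core 0 masked off
    echo 'obase=2; 15' | bc    # 1111 -> cores 0,1,2,3 (the earlier 0xF run)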
00:29:47.409 [2024-04-26 20:47:05.621298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:47.409 [2024-04-26 20:47:05.621336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.409 [2024-04-26 20:47:05.621339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:48.000 20:47:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:48.000 20:47:06 -- common/autotest_common.sh@852 -- # return 0 00:29:48.000 20:47:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:48.000 20:47:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:48.000 20:47:06 -- common/autotest_common.sh@10 -- # set +x 00:29:48.000 20:47:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:48.000 20:47:06 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:48.000 20:47:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:48.000 20:47:06 -- common/autotest_common.sh@10 -- # set +x 00:29:48.000 [2024-04-26 20:47:06.138724] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:48.000 20:47:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:48.000 20:47:06 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:48.000 20:47:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:48.000 20:47:06 -- common/autotest_common.sh@10 -- # set +x 00:29:48.000 Malloc0 00:29:48.000 20:47:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:48.000 20:47:06 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:48.000 20:47:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:48.000 20:47:06 -- common/autotest_common.sh@10 -- # set +x 00:29:48.000 20:47:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:48.000 20:47:06 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:48.000 20:47:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:48.000 20:47:06 -- common/autotest_common.sh@10 -- # set +x 00:29:48.000 20:47:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:48.000 20:47:06 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:48.000 20:47:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:48.000 20:47:06 -- common/autotest_common.sh@10 -- # set +x 00:29:48.000 [2024-04-26 20:47:06.228485] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:48.000 20:47:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:48.000 20:47:06 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:48.000 20:47:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:48.001 20:47:06 -- common/autotest_common.sh@10 -- # set +x 00:29:48.001 [2024-04-26 20:47:06.236383] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:48.001 20:47:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:48.001 20:47:06 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:48.001 20:47:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:48.001 20:47:06 -- common/autotest_common.sh@10 -- # set +x 00:29:48.001 Malloc1 00:29:48.001 20:47:06 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:48.001 20:47:06 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:48.001 20:47:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:48.001 20:47:06 -- common/autotest_common.sh@10 -- # set +x 00:29:48.001 20:47:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:48.001 20:47:06 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:48.001 20:47:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:48.001 20:47:06 -- common/autotest_common.sh@10 -- # set +x 00:29:48.001 20:47:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:48.001 20:47:06 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:48.001 20:47:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:48.001 20:47:06 -- common/autotest_common.sh@10 -- # set +x 00:29:48.001 20:47:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:48.001 20:47:06 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:48.001 20:47:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:48.001 20:47:06 -- common/autotest_common.sh@10 -- # set +x 00:29:48.001 20:47:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:48.001 20:47:06 -- host/multicontroller.sh@44 -- # bdevperf_pid=3694616 00:29:48.001 20:47:06 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:48.001 20:47:06 -- host/multicontroller.sh@47 -- # waitforlisten 3694616 /var/tmp/bdevperf.sock 00:29:48.001 20:47:06 -- common/autotest_common.sh@819 -- # '[' -z 3694616 ']' 00:29:48.001 20:47:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:48.001 20:47:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:48.001 20:47:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:48.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
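Annotation: bdevperf here is started with -z, so it sits idle until configured over its private RPC socket; the harness then attaches controllers against /var/tmp/bdevperf.sock and triggers the workload with bdevperf.py. A sketch of that flow, assuming an SPDK build tree (paths relative to the checkout):

    SOCK=/var/tmp/bdevperf.sock
    # 128-deep 4 KiB writes for 1 s; -z waits for RPC configuration before running
    build/examples/bdevperf -z -r "$SOCK" -q 128 -o 4096 -w write -t 1 -f &
    bdevperf_pid=$!
    # first path to cnode1; -b fixes the controller/bdev name to NVMe0
    scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # run the configured job and block until it finishes
    examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests
    kill "$bdevperf_pid"

The negative cases that follow reuse the NVMe0 name with a mismatched hostnqn, a different subsystem, and the multipath "disable" and "failover" modes, each expected to fail with JSON-RPC error -114; only the attach to the second listener on port 4421 under the same identity is accepted as an additional path.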
00:29:48.001 20:47:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:48.001 20:47:06 -- common/autotest_common.sh@10 -- # set +x 00:29:48.001 20:47:06 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:48.937 20:47:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:48.937 20:47:07 -- common/autotest_common.sh@852 -- # return 0 00:29:48.937 20:47:07 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:29:48.937 20:47:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:48.937 20:47:07 -- common/autotest_common.sh@10 -- # set +x 00:29:49.198 NVMe0n1 00:29:49.198 20:47:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:49.198 20:47:07 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:49.198 20:47:07 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:49.198 20:47:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:49.198 20:47:07 -- common/autotest_common.sh@10 -- # set +x 00:29:49.198 20:47:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:49.198 1 00:29:49.198 20:47:07 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:49.198 20:47:07 -- common/autotest_common.sh@640 -- # local es=0 00:29:49.198 20:47:07 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:49.198 20:47:07 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:29:49.198 20:47:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:49.198 20:47:07 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:29:49.198 20:47:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:49.198 20:47:07 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:49.198 20:47:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:49.198 20:47:07 -- common/autotest_common.sh@10 -- # set +x 00:29:49.198 request: 00:29:49.198 { 00:29:49.198 "name": "NVMe0", 00:29:49.198 "trtype": "tcp", 00:29:49.198 "traddr": "10.0.0.2", 00:29:49.198 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:49.198 "hostaddr": "10.0.0.2", 00:29:49.198 "hostsvcid": "60000", 00:29:49.198 "adrfam": "ipv4", 00:29:49.198 "trsvcid": "4420", 00:29:49.198 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:49.198 "method": "bdev_nvme_attach_controller", 00:29:49.198 "req_id": 1 00:29:49.198 } 00:29:49.198 Got JSON-RPC error response 00:29:49.198 response: 00:29:49.198 { 00:29:49.198 "code": -114, 00:29:49.198 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:29:49.198 } 00:29:49.198 20:47:07 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:29:49.198 20:47:07 -- common/autotest_common.sh@643 -- # es=1 00:29:49.198 20:47:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:49.198 
20:47:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:49.198 20:47:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:49.199 20:47:07 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:49.199 20:47:07 -- common/autotest_common.sh@640 -- # local es=0 00:29:49.199 20:47:07 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:49.199 20:47:07 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:29:49.199 20:47:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:49.199 20:47:07 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:29:49.199 20:47:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:49.199 20:47:07 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:49.199 20:47:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:49.199 20:47:07 -- common/autotest_common.sh@10 -- # set +x 00:29:49.199 request: 00:29:49.199 { 00:29:49.199 "name": "NVMe0", 00:29:49.199 "trtype": "tcp", 00:29:49.199 "traddr": "10.0.0.2", 00:29:49.199 "hostaddr": "10.0.0.2", 00:29:49.199 "hostsvcid": "60000", 00:29:49.199 "adrfam": "ipv4", 00:29:49.199 "trsvcid": "4420", 00:29:49.199 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:49.199 "method": "bdev_nvme_attach_controller", 00:29:49.199 "req_id": 1 00:29:49.199 } 00:29:49.199 Got JSON-RPC error response 00:29:49.199 response: 00:29:49.199 { 00:29:49.199 "code": -114, 00:29:49.199 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:29:49.199 } 00:29:49.199 20:47:07 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:29:49.199 20:47:07 -- common/autotest_common.sh@643 -- # es=1 00:29:49.199 20:47:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:49.199 20:47:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:49.199 20:47:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:49.199 20:47:07 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:49.199 20:47:07 -- common/autotest_common.sh@640 -- # local es=0 00:29:49.199 20:47:07 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:49.199 20:47:07 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:29:49.199 20:47:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:49.199 20:47:07 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:29:49.199 20:47:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:49.199 20:47:07 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:49.199 20:47:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:49.199 20:47:07 -- 
common/autotest_common.sh@10 -- # set +x 00:29:49.199 request: 00:29:49.199 { 00:29:49.199 "name": "NVMe0", 00:29:49.199 "trtype": "tcp", 00:29:49.199 "traddr": "10.0.0.2", 00:29:49.199 "hostaddr": "10.0.0.2", 00:29:49.199 "hostsvcid": "60000", 00:29:49.199 "adrfam": "ipv4", 00:29:49.199 "trsvcid": "4420", 00:29:49.199 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:49.199 "multipath": "disable", 00:29:49.199 "method": "bdev_nvme_attach_controller", 00:29:49.199 "req_id": 1 00:29:49.199 } 00:29:49.199 Got JSON-RPC error response 00:29:49.199 response: 00:29:49.199 { 00:29:49.199 "code": -114, 00:29:49.199 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:29:49.199 } 00:29:49.199 20:47:07 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:29:49.199 20:47:07 -- common/autotest_common.sh@643 -- # es=1 00:29:49.199 20:47:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:49.199 20:47:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:49.199 20:47:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:49.199 20:47:07 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:49.199 20:47:07 -- common/autotest_common.sh@640 -- # local es=0 00:29:49.199 20:47:07 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:49.199 20:47:07 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:29:49.199 20:47:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:49.199 20:47:07 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:29:49.199 20:47:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:49.199 20:47:07 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:49.199 20:47:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:49.199 20:47:07 -- common/autotest_common.sh@10 -- # set +x 00:29:49.199 request: 00:29:49.199 { 00:29:49.199 "name": "NVMe0", 00:29:49.199 "trtype": "tcp", 00:29:49.199 "traddr": "10.0.0.2", 00:29:49.199 "hostaddr": "10.0.0.2", 00:29:49.199 "hostsvcid": "60000", 00:29:49.199 "adrfam": "ipv4", 00:29:49.199 "trsvcid": "4420", 00:29:49.199 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:49.199 "multipath": "failover", 00:29:49.199 "method": "bdev_nvme_attach_controller", 00:29:49.199 "req_id": 1 00:29:49.199 } 00:29:49.199 Got JSON-RPC error response 00:29:49.199 response: 00:29:49.199 { 00:29:49.199 "code": -114, 00:29:49.199 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:29:49.199 } 00:29:49.199 20:47:07 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:29:49.199 20:47:07 -- common/autotest_common.sh@643 -- # es=1 00:29:49.199 20:47:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:49.199 20:47:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:49.199 20:47:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:49.199 20:47:07 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:49.199 
20:47:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:49.199 20:47:07 -- common/autotest_common.sh@10 -- # set +x 00:29:49.457 00:29:49.457 20:47:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:49.457 20:47:07 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:49.457 20:47:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:49.457 20:47:07 -- common/autotest_common.sh@10 -- # set +x 00:29:49.457 20:47:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:49.457 20:47:07 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:29:49.457 20:47:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:49.457 20:47:07 -- common/autotest_common.sh@10 -- # set +x 00:29:49.715 00:29:49.715 20:47:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:49.715 20:47:07 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:49.715 20:47:07 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:49.715 20:47:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:49.715 20:47:07 -- common/autotest_common.sh@10 -- # set +x 00:29:49.715 20:47:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:49.715 20:47:07 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:49.715 20:47:07 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:50.650 0 00:29:50.650 20:47:08 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:50.650 20:47:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:50.650 20:47:08 -- common/autotest_common.sh@10 -- # set +x 00:29:50.650 20:47:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:50.650 20:47:08 -- host/multicontroller.sh@100 -- # killprocess 3694616 00:29:50.650 20:47:08 -- common/autotest_common.sh@926 -- # '[' -z 3694616 ']' 00:29:50.650 20:47:08 -- common/autotest_common.sh@930 -- # kill -0 3694616 00:29:50.650 20:47:08 -- common/autotest_common.sh@931 -- # uname 00:29:50.650 20:47:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:50.910 20:47:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3694616 00:29:50.910 20:47:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:50.910 20:47:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:50.910 20:47:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3694616' 00:29:50.910 killing process with pid 3694616 00:29:50.910 20:47:09 -- common/autotest_common.sh@945 -- # kill 3694616 00:29:50.910 20:47:09 -- common/autotest_common.sh@950 -- # wait 3694616 00:29:51.171 20:47:09 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:51.171 20:47:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:51.171 20:47:09 -- common/autotest_common.sh@10 -- # set +x 00:29:51.171 20:47:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:51.171 20:47:09 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:51.171 20:47:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:51.171 20:47:09 -- 
common/autotest_common.sh@10 -- # set +x 00:29:51.171 20:47:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:51.171 20:47:09 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:29:51.171 20:47:09 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:51.171 20:47:09 -- common/autotest_common.sh@1597 -- # read -r file 00:29:51.171 20:47:09 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:51.171 20:47:09 -- common/autotest_common.sh@1596 -- # sort -u 00:29:51.171 20:47:09 -- common/autotest_common.sh@1598 -- # cat 00:29:51.171 --- /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:51.171 [2024-04-26 20:47:06.377770] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:51.171 [2024-04-26 20:47:06.377888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3694616 ] 00:29:51.171 EAL: No free 2048 kB hugepages reported on node 1 00:29:51.171 [2024-04-26 20:47:06.489848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.171 [2024-04-26 20:47:06.578786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.171 [2024-04-26 20:47:07.858785] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 68f136e3-7e09-4c08-b0a8-70187aecf88a already exists 00:29:51.171 [2024-04-26 20:47:07.858828] bdev.c:7598:bdev_register: *ERROR*: Unable to add uuid:68f136e3-7e09-4c08-b0a8-70187aecf88a alias for bdev NVMe1n1 00:29:51.171 [2024-04-26 20:47:07.858843] bdev_nvme.c:4230:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:51.171 Running I/O for 1 seconds... 
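The pattern in the four attach attempts above is the guard this test exercises: a controller name may only be reused when every identity option matches and the new connection is a genuinely different path; anything else comes back as -114. The same sequence can be driven against any bdevperf RPC socket; a minimal sketch in the same shell style (assumes SPDK's scripts/rpc.py and a target listening on 10.0.0.2, ports 4420/4421, as in this run):

    # First attach creates controller NVMe0 and bdev NVMe0n1 (succeeds).
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    # Same name but a different host NQN (-q), a different subsystem NQN,
    # or -x disable / -x failover against the same address: each is rejected
    # with -114, matching the four error responses logged above.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 \
        -q nqn.2021-09-7.io.spdk:00001
    # A second listener (port 4421) with no conflicting options is accepted
    # as an additional path under the existing name.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

The NVMe1 attach captured in try.txt above fails for a different reason: it gets as far as bdev registration and then collides on the namespace UUID alias, which is exactly the bdev_name_add / spdk_bdev_register() error sequence in the dump.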
00:29:51.171 00:29:51.171 Latency(us) 00:29:51.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:51.171 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:51.171 NVMe0n1 : 1.00 25528.80 99.72 0.00 0.00 4998.80 4828.97 16280.52 00:29:51.171 =================================================================================================================== 00:29:51.171 Total : 25528.80 99.72 0.00 0.00 4998.80 4828.97 16280.52 00:29:51.171 Received shutdown signal, test time was about 1.000000 seconds 00:29:51.171 00:29:51.171 Latency(us) 00:29:51.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:51.171 =================================================================================================================== 00:29:51.171 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:51.171 --- /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:51.171 20:47:09 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:51.171 20:47:09 -- common/autotest_common.sh@1597 -- # read -r file 00:29:51.171 20:47:09 -- host/multicontroller.sh@108 -- # nvmftestfini 00:29:51.171 20:47:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:51.171 20:47:09 -- nvmf/common.sh@116 -- # sync 00:29:51.171 20:47:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:51.171 20:47:09 -- nvmf/common.sh@119 -- # set +e 00:29:51.171 20:47:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:51.171 20:47:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:51.171 rmmod nvme_tcp 00:29:51.171 rmmod nvme_fabrics 00:29:51.171 rmmod nvme_keyring 00:29:51.430 20:47:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:51.431 20:47:09 -- nvmf/common.sh@123 -- # set -e 00:29:51.431 20:47:09 -- nvmf/common.sh@124 -- # return 0 00:29:51.431 20:47:09 -- nvmf/common.sh@477 -- # '[' -n 3694350 ']' 00:29:51.431 20:47:09 -- nvmf/common.sh@478 -- # killprocess 3694350 00:29:51.431 20:47:09 -- common/autotest_common.sh@926 -- # '[' -z 3694350 ']' 00:29:51.431 20:47:09 -- common/autotest_common.sh@930 -- # kill -0 3694350 00:29:51.431 20:47:09 -- common/autotest_common.sh@931 -- # uname 00:29:51.431 20:47:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:51.431 20:47:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3694350 00:29:51.431 20:47:09 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:51.431 20:47:09 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:51.431 20:47:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3694350' 00:29:51.431 killing process with pid 3694350 00:29:51.431 20:47:09 -- common/autotest_common.sh@945 -- # kill 3694350 00:29:51.431 20:47:09 -- common/autotest_common.sh@950 -- # wait 3694350 00:29:51.998 20:47:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:51.998 20:47:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:51.998 20:47:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:51.998 20:47:10 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:51.998 20:47:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:51.998 20:47:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:51.998 20:47:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:51.998 20:47:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.905 20:47:12 -- nvmf/common.sh@278 
-- # ip -4 addr flush cvl_0_1 00:29:53.905 00:29:53.905 real 0m12.337s 00:29:53.905 user 0m18.065s 00:29:53.905 sys 0m4.735s 00:29:53.905 20:47:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:53.905 20:47:12 -- common/autotest_common.sh@10 -- # set +x 00:29:53.905 ************************************ 00:29:53.905 END TEST nvmf_multicontroller 00:29:53.905 ************************************ 00:29:54.163 20:47:12 -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:54.164 20:47:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:54.164 20:47:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:54.164 20:47:12 -- common/autotest_common.sh@10 -- # set +x 00:29:54.164 ************************************ 00:29:54.164 START TEST nvmf_aer 00:29:54.164 ************************************ 00:29:54.164 20:47:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:54.164 * Looking for test storage... 00:29:54.164 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:29:54.164 20:47:12 -- host/aer.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:29:54.164 20:47:12 -- nvmf/common.sh@7 -- # uname -s 00:29:54.164 20:47:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:54.164 20:47:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:54.164 20:47:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:54.164 20:47:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:54.164 20:47:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:54.164 20:47:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:54.164 20:47:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:54.164 20:47:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:54.164 20:47:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:54.164 20:47:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:54.164 20:47:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:29:54.164 20:47:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:29:54.164 20:47:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:54.164 20:47:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:54.164 20:47:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:54.164 20:47:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:29:54.164 20:47:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:54.164 20:47:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:54.164 20:47:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:54.164 20:47:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.164 20:47:12 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.164 20:47:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.164 20:47:12 -- paths/export.sh@5 -- # export PATH 00:29:54.164 20:47:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.164 20:47:12 -- nvmf/common.sh@46 -- # : 0 00:29:54.164 20:47:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:54.164 20:47:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:54.164 20:47:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:54.164 20:47:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:54.164 20:47:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:54.164 20:47:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:54.164 20:47:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:54.164 20:47:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:54.164 20:47:12 -- host/aer.sh@11 -- # nvmftestinit 00:29:54.164 20:47:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:54.164 20:47:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:54.164 20:47:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:54.164 20:47:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:54.164 20:47:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:54.164 20:47:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.164 20:47:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:54.164 20:47:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.164 20:47:12 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:29:54.164 20:47:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:54.164 20:47:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:54.164 20:47:12 -- common/autotest_common.sh@10 -- # set +x 00:29:59.517 20:47:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:59.517 20:47:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:59.517 20:47:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:59.517 
20:47:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:59.517 20:47:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:59.517 20:47:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:59.517 20:47:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:59.517 20:47:17 -- nvmf/common.sh@294 -- # net_devs=() 00:29:59.517 20:47:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:59.517 20:47:17 -- nvmf/common.sh@295 -- # e810=() 00:29:59.517 20:47:17 -- nvmf/common.sh@295 -- # local -ga e810 00:29:59.517 20:47:17 -- nvmf/common.sh@296 -- # x722=() 00:29:59.517 20:47:17 -- nvmf/common.sh@296 -- # local -ga x722 00:29:59.517 20:47:17 -- nvmf/common.sh@297 -- # mlx=() 00:29:59.517 20:47:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:59.517 20:47:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:59.517 20:47:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:59.517 20:47:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:59.517 20:47:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:59.517 20:47:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:59.517 20:47:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:59.517 20:47:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:59.517 20:47:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:59.517 20:47:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:59.517 20:47:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:59.517 20:47:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:59.517 20:47:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:59.517 20:47:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:59.517 20:47:17 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:29:59.517 20:47:17 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:29:59.517 20:47:17 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:29:59.517 20:47:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:59.517 20:47:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:59.517 20:47:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:29:59.517 Found 0000:27:00.0 (0x8086 - 0x159b) 00:29:59.517 20:47:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:59.517 20:47:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:59.517 20:47:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.517 20:47:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.517 20:47:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:59.517 20:47:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:59.517 20:47:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:29:59.517 Found 0000:27:00.1 (0x8086 - 0x159b) 00:29:59.517 20:47:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:59.517 20:47:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:59.517 20:47:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.517 20:47:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.517 20:47:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:59.517 20:47:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:59.517 20:47:17 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:29:59.517 20:47:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:59.517 
20:47:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.517 20:47:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:59.517 20:47:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.517 20:47:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:29:59.517 Found net devices under 0000:27:00.0: cvl_0_0 00:29:59.517 20:47:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.517 20:47:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:59.517 20:47:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.517 20:47:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:59.517 20:47:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.517 20:47:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:29:59.517 Found net devices under 0000:27:00.1: cvl_0_1 00:29:59.517 20:47:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.517 20:47:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:59.517 20:47:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:59.517 20:47:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:59.517 20:47:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:59.517 20:47:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:59.517 20:47:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:59.517 20:47:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:59.517 20:47:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:59.517 20:47:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:59.517 20:47:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:59.517 20:47:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:59.517 20:47:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:59.517 20:47:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:59.517 20:47:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:59.517 20:47:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:59.517 20:47:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:59.517 20:47:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:59.517 20:47:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:59.517 20:47:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:59.517 20:47:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:59.517 20:47:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:59.517 20:47:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:59.517 20:47:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:59.517 20:47:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:59.517 20:47:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:59.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:59.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:29:59.517 00:29:59.517 --- 10.0.0.2 ping statistics --- 00:29:59.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.517 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:29:59.517 20:47:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:59.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
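nvmf_tcp_init above is what lets one machine exercise a real TCP transport: one port of the NIC pair (cvl_0_0) moves into a private network namespace to host the target at 10.0.0.2, while the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Reduced to its essentials (a sketch; the cvl_0_* interface names and the namespace name are the ones from this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator side, root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                             # reachability check, as in the surrounding log

Both pings completing with 0% loss in each direction is what lets nvmf_tcp_init return 0 and the test proceed.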
00:29:59.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:29:59.517 00:29:59.517 --- 10.0.0.1 ping statistics --- 00:29:59.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.517 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:29:59.517 20:47:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:59.517 20:47:17 -- nvmf/common.sh@410 -- # return 0 00:29:59.517 20:47:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:59.517 20:47:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:59.517 20:47:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:59.517 20:47:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:59.517 20:47:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:59.517 20:47:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:59.517 20:47:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:59.517 20:47:17 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:59.517 20:47:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:59.517 20:47:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:59.517 20:47:17 -- common/autotest_common.sh@10 -- # set +x 00:29:59.517 20:47:17 -- nvmf/common.sh@469 -- # nvmfpid=3699164 00:29:59.517 20:47:17 -- nvmf/common.sh@470 -- # waitforlisten 3699164 00:29:59.517 20:47:17 -- common/autotest_common.sh@819 -- # '[' -z 3699164 ']' 00:29:59.517 20:47:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.517 20:47:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:59.517 20:47:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:59.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:59.517 20:47:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:59.517 20:47:17 -- common/autotest_common.sh@10 -- # set +x 00:29:59.517 20:47:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:59.517 [2024-04-26 20:47:17.695420] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:59.517 [2024-04-26 20:47:17.695528] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:59.517 EAL: No free 2048 kB hugepages reported on node 1 00:29:59.517 [2024-04-26 20:47:17.818339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:59.777 [2024-04-26 20:47:17.924209] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:59.777 [2024-04-26 20:47:17.924416] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:59.777 [2024-04-26 20:47:17.924433] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:59.778 [2024-04-26 20:47:17.924443] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
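nvmfappstart then launches the target inside that namespace and blocks until its RPC socket answers; the EAL initialization lines above are nvmf_tgt coming up with all four cores (-m 0xF). A reduced sketch of the start-and-wait pattern (the launch line is verbatim from the log; the polling loop is a hypothetical stand-in, since autotest_common.sh's waitforlisten is more elaborate, but the idea is the same):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the RPC socket until the app is ready to serve requests.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done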
00:29:59.778 [2024-04-26 20:47:17.924526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:59.778 [2024-04-26 20:47:17.924551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:59.778 [2024-04-26 20:47:17.924571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.778 [2024-04-26 20:47:17.924583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:00.345 20:47:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:00.345 20:47:18 -- common/autotest_common.sh@852 -- # return 0 00:30:00.345 20:47:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:00.345 20:47:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:00.345 20:47:18 -- common/autotest_common.sh@10 -- # set +x 00:30:00.345 20:47:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:00.345 20:47:18 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:00.345 20:47:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:00.345 20:47:18 -- common/autotest_common.sh@10 -- # set +x 00:30:00.345 [2024-04-26 20:47:18.444342] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:00.345 20:47:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:00.345 20:47:18 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:00.345 20:47:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:00.345 20:47:18 -- common/autotest_common.sh@10 -- # set +x 00:30:00.345 Malloc0 00:30:00.345 20:47:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:00.345 20:47:18 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:00.345 20:47:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:00.345 20:47:18 -- common/autotest_common.sh@10 -- # set +x 00:30:00.345 20:47:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:00.345 20:47:18 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:00.345 20:47:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:00.345 20:47:18 -- common/autotest_common.sh@10 -- # set +x 00:30:00.345 20:47:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:00.345 20:47:18 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:00.345 20:47:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:00.345 20:47:18 -- common/autotest_common.sh@10 -- # set +x 00:30:00.345 [2024-04-26 20:47:18.508714] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:00.345 20:47:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:00.345 20:47:18 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:00.345 20:47:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:00.345 20:47:18 -- common/autotest_common.sh@10 -- # set +x 00:30:00.345 [2024-04-26 20:47:18.516447] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:30:00.345 [ 00:30:00.345 { 00:30:00.345 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:00.345 "subtype": "Discovery", 00:30:00.345 "listen_addresses": [], 00:30:00.345 "allow_any_host": true, 00:30:00.345 "hosts": [] 00:30:00.345 }, 00:30:00.345 { 00:30:00.345 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:30:00.345 "subtype": "NVMe", 00:30:00.345 "listen_addresses": [ 00:30:00.345 { 00:30:00.345 "transport": "TCP", 00:30:00.345 "trtype": "TCP", 00:30:00.345 "adrfam": "IPv4", 00:30:00.345 "traddr": "10.0.0.2", 00:30:00.345 "trsvcid": "4420" 00:30:00.345 } 00:30:00.345 ], 00:30:00.345 "allow_any_host": true, 00:30:00.345 "hosts": [], 00:30:00.345 "serial_number": "SPDK00000000000001", 00:30:00.345 "model_number": "SPDK bdev Controller", 00:30:00.345 "max_namespaces": 2, 00:30:00.345 "min_cntlid": 1, 00:30:00.345 "max_cntlid": 65519, 00:30:00.345 "namespaces": [ 00:30:00.345 { 00:30:00.345 "nsid": 1, 00:30:00.345 "bdev_name": "Malloc0", 00:30:00.345 "name": "Malloc0", 00:30:00.345 "nguid": "F942A7928B744971B7256AF23B27E93E", 00:30:00.345 "uuid": "f942a792-8b74-4971-b725-6af23b27e93e" 00:30:00.345 } 00:30:00.345 ] 00:30:00.345 } 00:30:00.345 ] 00:30:00.345 20:47:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:00.345 20:47:18 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:00.345 20:47:18 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:00.345 20:47:18 -- host/aer.sh@33 -- # aerpid=3699465 00:30:00.345 20:47:18 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:30:00.345 20:47:18 -- common/autotest_common.sh@1244 -- # local i=0 00:30:00.345 20:47:18 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:00.345 20:47:18 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:30:00.345 20:47:18 -- common/autotest_common.sh@1247 -- # i=1 00:30:00.345 20:47:18 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:30:00.345 20:47:18 -- host/aer.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:00.345 20:47:18 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:00.345 20:47:18 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:30:00.345 20:47:18 -- common/autotest_common.sh@1247 -- # i=2 00:30:00.345 20:47:18 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:30:00.345 EAL: No free 2048 kB hugepages reported on node 1 00:30:00.604 20:47:18 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:00.604 20:47:18 -- common/autotest_common.sh@1246 -- # '[' 2 -lt 200 ']' 00:30:00.604 20:47:18 -- common/autotest_common.sh@1247 -- # i=3 00:30:00.604 20:47:18 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:30:00.604 20:47:18 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:00.604 20:47:18 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:00.604 20:47:18 -- common/autotest_common.sh@1255 -- # return 0 00:30:00.604 20:47:18 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:00.604 20:47:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:00.604 20:47:18 -- common/autotest_common.sh@10 -- # set +x 00:30:00.604 Malloc1 00:30:00.604 20:47:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:00.604 20:47:18 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:00.604 20:47:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:00.604 20:47:18 -- common/autotest_common.sh@10 -- # set +x 00:30:00.604 20:47:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:00.604 20:47:18 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:00.604 20:47:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:00.604 20:47:18 -- common/autotest_common.sh@10 -- # set +x 00:30:00.604 [ 00:30:00.604 { 00:30:00.604 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:00.604 "subtype": "Discovery", 00:30:00.604 "listen_addresses": [], 00:30:00.604 "allow_any_host": true, 00:30:00.604 "hosts": [] 00:30:00.604 }, 00:30:00.604 { 00:30:00.604 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:00.604 "subtype": "NVMe", 00:30:00.604 "listen_addresses": [ 00:30:00.604 { 00:30:00.604 "transport": "TCP", 00:30:00.604 "trtype": "TCP", 00:30:00.604 "adrfam": "IPv4", 00:30:00.604 "traddr": "10.0.0.2", 00:30:00.604 "trsvcid": "4420" 00:30:00.604 } 00:30:00.604 ], 00:30:00.604 "allow_any_host": true, 00:30:00.604 "hosts": [], 00:30:00.604 "serial_number": "SPDK00000000000001", 00:30:00.604 "model_number": "SPDK bdev Controller", 00:30:00.604 "max_namespaces": 2, 00:30:00.604 "min_cntlid": 1, 00:30:00.604 "max_cntlid": 65519, 00:30:00.604 "namespaces": [ 00:30:00.604 { 00:30:00.604 "nsid": 1, 00:30:00.604 "bdev_name": "Malloc0", 00:30:00.604 "name": "Malloc0", 00:30:00.604 "nguid": "F942A7928B744971B7256AF23B27E93E", 00:30:00.604 "uuid": "f942a792-8b74-4971-b725-6af23b27e93e" 00:30:00.604 }, 00:30:00.604 { 00:30:00.604 "nsid": 2, 00:30:00.604 "bdev_name": "Malloc1", 00:30:00.604 "name": "Malloc1", 00:30:00.604 "nguid": "FB9C5BF6CF7A42FFB79CC8788B9584AA", 00:30:00.604 "uuid": "fb9c5bf6-cf7a-42ff-b79c-c8788b9584aa" 00:30:00.604 } 00:30:00.604 ] 00:30:00.604 } 00:30:00.604 ] 00:30:00.604 20:47:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:00.604 20:47:18 -- host/aer.sh@43 -- # wait 3699465 00:30:00.863 Asynchronous Event Request test 00:30:00.863 Attaching to 10.0.0.2 00:30:00.863 Attached to 10.0.0.2 00:30:00.863 Registering asynchronous event callbacks... 00:30:00.863 Starting namespace attribute notice tests for all controllers... 00:30:00.863 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:00.863 aer_cb - Changed Namespace 00:30:00.863 Cleaning up... 
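Two pieces of machinery meet at this point. On the harness side, waitforfile polls for the touch file the aer tool creates once its callbacks are registered; the '[' $i -lt 200 ']' / sleep 0.1 fragments above are that loop, which allows roughly 20 seconds. A sketch of the same helper (assumed shape, reconstructed from those fragments):

    waitforfile() {
        local i=0
        while [ ! -e "$1" ] && [ "$i" -lt 200 ]; do
            i=$((i + 1)); sleep 0.1
        done
        [ -e "$1" ]   # non-zero status means we timed out
    }
    waitforfile /tmp/aer_touch_file

On the target side, the trigger is hot-adding Malloc1 as nsid 2 to the live subsystem, which makes the controller post a Namespace Attribute Changed AER; that is the "aer_cb - Changed Namespace" line above. Just the two RPCs, as a sketch:

    scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2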
00:30:00.863 20:47:18 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:00.863 20:47:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:00.863 20:47:18 -- common/autotest_common.sh@10 -- # set +x 00:30:00.863 20:47:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:00.863 20:47:19 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:00.863 20:47:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:00.863 20:47:19 -- common/autotest_common.sh@10 -- # set +x 00:30:00.863 20:47:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:00.863 20:47:19 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:00.863 20:47:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:00.863 20:47:19 -- common/autotest_common.sh@10 -- # set +x 00:30:00.863 20:47:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:00.863 20:47:19 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:00.863 20:47:19 -- host/aer.sh@51 -- # nvmftestfini 00:30:00.863 20:47:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:00.863 20:47:19 -- nvmf/common.sh@116 -- # sync 00:30:00.863 20:47:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:00.863 20:47:19 -- nvmf/common.sh@119 -- # set +e 00:30:00.863 20:47:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:00.863 20:47:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:00.863 rmmod nvme_tcp 00:30:00.863 rmmod nvme_fabrics 00:30:00.863 rmmod nvme_keyring 00:30:00.863 20:47:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:00.863 20:47:19 -- nvmf/common.sh@123 -- # set -e 00:30:00.863 20:47:19 -- nvmf/common.sh@124 -- # return 0 00:30:00.863 20:47:19 -- nvmf/common.sh@477 -- # '[' -n 3699164 ']' 00:30:00.863 20:47:19 -- nvmf/common.sh@478 -- # killprocess 3699164 00:30:00.863 20:47:19 -- common/autotest_common.sh@926 -- # '[' -z 3699164 ']' 00:30:00.863 20:47:19 -- common/autotest_common.sh@930 -- # kill -0 3699164 00:30:00.863 20:47:19 -- common/autotest_common.sh@931 -- # uname 00:30:00.863 20:47:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:00.863 20:47:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3699164 00:30:00.863 20:47:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:00.863 20:47:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:00.863 20:47:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3699164' 00:30:00.863 killing process with pid 3699164 00:30:00.863 20:47:19 -- common/autotest_common.sh@945 -- # kill 3699164 00:30:00.863 [2024-04-26 20:47:19.179539] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:00.863 20:47:19 -- common/autotest_common.sh@950 -- # wait 3699164 00:30:01.434 20:47:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:01.434 20:47:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:01.434 20:47:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:01.434 20:47:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:01.434 20:47:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:01.434 20:47:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.434 20:47:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:01.434 20:47:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.975 20:47:21 -- 
nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:03.975 00:30:03.975 real 0m9.457s 00:30:03.975 user 0m7.827s 00:30:03.975 sys 0m4.489s 00:30:03.975 20:47:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:03.975 20:47:21 -- common/autotest_common.sh@10 -- # set +x 00:30:03.975 ************************************ 00:30:03.975 END TEST nvmf_aer 00:30:03.975 ************************************ 00:30:03.975 20:47:21 -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:03.975 20:47:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:03.975 20:47:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:03.975 20:47:21 -- common/autotest_common.sh@10 -- # set +x 00:30:03.975 ************************************ 00:30:03.975 START TEST nvmf_async_init 00:30:03.975 ************************************ 00:30:03.975 20:47:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:03.975 * Looking for test storage... 00:30:03.975 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:30:03.975 20:47:21 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:30:03.975 20:47:21 -- nvmf/common.sh@7 -- # uname -s 00:30:03.975 20:47:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:03.975 20:47:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:03.975 20:47:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:03.975 20:47:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:03.975 20:47:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:03.975 20:47:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:03.975 20:47:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:03.975 20:47:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:03.975 20:47:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:03.975 20:47:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:03.975 20:47:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:30:03.975 20:47:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:30:03.975 20:47:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:03.975 20:47:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:03.975 20:47:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:30:03.975 20:47:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:30:03.975 20:47:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:03.975 20:47:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:03.975 20:47:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:03.975 20:47:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.975 
20:47:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.975 20:47:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.975 20:47:21 -- paths/export.sh@5 -- # export PATH 00:30:03.975 20:47:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.975 20:47:21 -- nvmf/common.sh@46 -- # : 0 00:30:03.975 20:47:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:03.975 20:47:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:03.975 20:47:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:03.975 20:47:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:03.975 20:47:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:03.975 20:47:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:03.975 20:47:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:03.975 20:47:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:03.975 20:47:21 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:03.975 20:47:21 -- host/async_init.sh@14 -- # null_block_size=512 00:30:03.975 20:47:21 -- host/async_init.sh@15 -- # null_bdev=null0 00:30:03.975 20:47:21 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:03.975 20:47:21 -- host/async_init.sh@20 -- # uuidgen 00:30:03.975 20:47:21 -- host/async_init.sh@20 -- # tr -d - 00:30:03.975 20:47:21 -- host/async_init.sh@20 -- # nguid=be937024bde84c078338a7c90cabfeae 00:30:03.975 20:47:21 -- host/async_init.sh@22 -- # nvmftestinit 00:30:03.975 20:47:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:03.975 20:47:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:03.975 20:47:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:03.975 20:47:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:03.975 20:47:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:03.975 20:47:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.975 20:47:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:03.975 20:47:21 -- common/autotest_common.sh@22 -- 
# _remove_spdk_ns 00:30:03.975 20:47:21 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:30:03.975 20:47:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:03.975 20:47:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:03.975 20:47:21 -- common/autotest_common.sh@10 -- # set +x 00:30:09.249 20:47:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:09.249 20:47:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:09.249 20:47:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:09.249 20:47:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:09.250 20:47:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:09.250 20:47:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:09.250 20:47:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:09.250 20:47:27 -- nvmf/common.sh@294 -- # net_devs=() 00:30:09.250 20:47:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:09.250 20:47:27 -- nvmf/common.sh@295 -- # e810=() 00:30:09.250 20:47:27 -- nvmf/common.sh@295 -- # local -ga e810 00:30:09.250 20:47:27 -- nvmf/common.sh@296 -- # x722=() 00:30:09.250 20:47:27 -- nvmf/common.sh@296 -- # local -ga x722 00:30:09.250 20:47:27 -- nvmf/common.sh@297 -- # mlx=() 00:30:09.250 20:47:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:09.250 20:47:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:09.250 20:47:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:09.250 20:47:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:09.250 20:47:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:09.250 20:47:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:09.250 20:47:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:09.250 20:47:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:09.250 20:47:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:09.250 20:47:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:09.250 20:47:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:09.250 20:47:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:09.250 20:47:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:09.250 20:47:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:09.250 20:47:27 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:30:09.250 20:47:27 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:30:09.250 20:47:27 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:30:09.250 20:47:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:09.250 20:47:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:09.250 20:47:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:30:09.250 Found 0000:27:00.0 (0x8086 - 0x159b) 00:30:09.250 20:47:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:09.250 20:47:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:09.250 20:47:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.250 20:47:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.250 20:47:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:09.250 20:47:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:09.250 20:47:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:30:09.250 Found 0000:27:00.1 (0x8086 - 0x159b) 00:30:09.250 20:47:27 -- nvmf/common.sh@341 -- # [[ 
ice == unknown ]] 00:30:09.250 20:47:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:09.250 20:47:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.250 20:47:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.250 20:47:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:09.250 20:47:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:09.250 20:47:27 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:30:09.250 20:47:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:09.250 20:47:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.250 20:47:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:09.250 20:47:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.250 20:47:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:30:09.250 Found net devices under 0000:27:00.0: cvl_0_0 00:30:09.250 20:47:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.250 20:47:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:09.250 20:47:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.250 20:47:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:09.250 20:47:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.250 20:47:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:30:09.250 Found net devices under 0000:27:00.1: cvl_0_1 00:30:09.250 20:47:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.250 20:47:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:09.250 20:47:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:09.250 20:47:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:09.250 20:47:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:09.250 20:47:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:09.250 20:47:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:09.250 20:47:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:09.250 20:47:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:09.250 20:47:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:09.250 20:47:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:09.250 20:47:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:09.250 20:47:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:09.250 20:47:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:09.250 20:47:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:09.250 20:47:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:09.250 20:47:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:09.250 20:47:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:09.250 20:47:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:09.250 20:47:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:09.250 20:47:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:09.250 20:47:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:09.250 20:47:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:09.250 20:47:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:09.250 20:47:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:09.250 20:47:27 -- 
nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:09.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:09.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.411 ms 00:30:09.250 00:30:09.250 --- 10.0.0.2 ping statistics --- 00:30:09.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:09.250 rtt min/avg/max/mdev = 0.411/0.411/0.411/0.000 ms 00:30:09.250 20:47:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:09.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:09.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:30:09.250 00:30:09.250 --- 10.0.0.1 ping statistics --- 00:30:09.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:09.250 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:30:09.250 20:47:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:09.250 20:47:27 -- nvmf/common.sh@410 -- # return 0 00:30:09.250 20:47:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:09.250 20:47:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:09.250 20:47:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:09.250 20:47:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:09.250 20:47:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:09.250 20:47:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:09.250 20:47:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:09.250 20:47:27 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:30:09.250 20:47:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:09.250 20:47:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:09.250 20:47:27 -- common/autotest_common.sh@10 -- # set +x 00:30:09.250 20:47:27 -- nvmf/common.sh@469 -- # nvmfpid=3703660 00:30:09.250 20:47:27 -- nvmf/common.sh@470 -- # waitforlisten 3703660 00:30:09.250 20:47:27 -- common/autotest_common.sh@819 -- # '[' -z 3703660 ']' 00:30:09.250 20:47:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:09.250 20:47:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:09.250 20:47:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:09.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:09.250 20:47:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:09.250 20:47:27 -- common/autotest_common.sh@10 -- # set +x 00:30:09.250 20:47:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:09.250 [2024-04-26 20:47:27.409277] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:30:09.250 [2024-04-26 20:47:27.409383] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:09.250 EAL: No free 2048 kB hugepages reported on node 1 00:30:09.250 [2024-04-26 20:47:27.525861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.511 [2024-04-26 20:47:27.621581] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:09.511 [2024-04-26 20:47:27.621746] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
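The nvmf_tcp_init sequence traced above is the entire networking setup for this rig: one port of a dual-port NIC is moved into a private network namespace so a single host can play both NVMe/TCP initiator and target over real wire. A condensed sketch of that plumbing, using the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addresses from this run; every command below appears in the trace, only set -e and the comments are added:

    set -e
    NS=cvl_0_0_ns_spdk
    # Clear any stale addresses, then push the target-side port into its own namespace.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"
    # Initiator keeps 10.0.0.1 in the root namespace; the target gets 10.0.0.2 inside it.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # Let NVMe/TCP (port 4420) through, then prove reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1
    modprobe nvme-tcp

Note the ordering: the address flush on cvl_0_0 has to happen before the link leaves the root namespace, and both pings are the harness's cheap sanity check before any NVMe traffic flows.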
00:30:09.511 [2024-04-26 20:47:27.621761] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:09.511 [2024-04-26 20:47:27.621769] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:09.511 [2024-04-26 20:47:27.621800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:09.770 20:47:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:09.770 20:47:28 -- common/autotest_common.sh@852 -- # return 0 00:30:09.770 20:47:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:09.770 20:47:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:09.770 20:47:28 -- common/autotest_common.sh@10 -- # set +x 00:30:10.031 20:47:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:10.031 20:47:28 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:10.031 20:47:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:10.031 20:47:28 -- common/autotest_common.sh@10 -- # set +x 00:30:10.031 [2024-04-26 20:47:28.132289] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:10.031 20:47:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:10.031 20:47:28 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:30:10.031 20:47:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:10.031 20:47:28 -- common/autotest_common.sh@10 -- # set +x 00:30:10.031 null0 00:30:10.031 20:47:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:10.031 20:47:28 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:30:10.031 20:47:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:10.031 20:47:28 -- common/autotest_common.sh@10 -- # set +x 00:30:10.031 20:47:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:10.031 20:47:28 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:30:10.031 20:47:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:10.031 20:47:28 -- common/autotest_common.sh@10 -- # set +x 00:30:10.031 20:47:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:10.031 20:47:28 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g be937024bde84c078338a7c90cabfeae 00:30:10.031 20:47:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:10.031 20:47:28 -- common/autotest_common.sh@10 -- # set +x 00:30:10.031 20:47:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:10.031 20:47:28 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:10.031 20:47:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:10.031 20:47:28 -- common/autotest_common.sh@10 -- # set +x 00:30:10.031 [2024-04-26 20:47:28.172498] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:10.031 20:47:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:10.032 20:47:28 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:30:10.032 20:47:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:10.032 20:47:28 -- common/autotest_common.sh@10 -- # set +x 00:30:10.294 nvme0n1 00:30:10.294 20:47:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:10.294 20:47:28 -- host/async_init.sh@41 -- # 
rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:10.294 20:47:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:10.294 20:47:28 -- common/autotest_common.sh@10 -- # set +x 00:30:10.294 [ 00:30:10.294 { 00:30:10.294 "name": "nvme0n1", 00:30:10.294 "aliases": [ 00:30:10.294 "be937024-bde8-4c07-8338-a7c90cabfeae" 00:30:10.294 ], 00:30:10.294 "product_name": "NVMe disk", 00:30:10.294 "block_size": 512, 00:30:10.294 "num_blocks": 2097152, 00:30:10.294 "uuid": "be937024-bde8-4c07-8338-a7c90cabfeae", 00:30:10.294 "assigned_rate_limits": { 00:30:10.294 "rw_ios_per_sec": 0, 00:30:10.294 "rw_mbytes_per_sec": 0, 00:30:10.294 "r_mbytes_per_sec": 0, 00:30:10.294 "w_mbytes_per_sec": 0 00:30:10.294 }, 00:30:10.294 "claimed": false, 00:30:10.294 "zoned": false, 00:30:10.294 "supported_io_types": { 00:30:10.294 "read": true, 00:30:10.294 "write": true, 00:30:10.294 "unmap": false, 00:30:10.294 "write_zeroes": true, 00:30:10.294 "flush": true, 00:30:10.294 "reset": true, 00:30:10.294 "compare": true, 00:30:10.294 "compare_and_write": true, 00:30:10.294 "abort": true, 00:30:10.294 "nvme_admin": true, 00:30:10.294 "nvme_io": true 00:30:10.294 }, 00:30:10.294 "driver_specific": { 00:30:10.294 "nvme": [ 00:30:10.294 { 00:30:10.294 "trid": { 00:30:10.294 "trtype": "TCP", 00:30:10.294 "adrfam": "IPv4", 00:30:10.294 "traddr": "10.0.0.2", 00:30:10.294 "trsvcid": "4420", 00:30:10.294 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:10.294 }, 00:30:10.294 "ctrlr_data": { 00:30:10.294 "cntlid": 1, 00:30:10.294 "vendor_id": "0x8086", 00:30:10.294 "model_number": "SPDK bdev Controller", 00:30:10.294 "serial_number": "00000000000000000000", 00:30:10.294 "firmware_revision": "24.01.1", 00:30:10.294 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:10.294 "oacs": { 00:30:10.294 "security": 0, 00:30:10.294 "format": 0, 00:30:10.294 "firmware": 0, 00:30:10.294 "ns_manage": 0 00:30:10.294 }, 00:30:10.294 "multi_ctrlr": true, 00:30:10.294 "ana_reporting": false 00:30:10.294 }, 00:30:10.294 "vs": { 00:30:10.294 "nvme_version": "1.3" 00:30:10.294 }, 00:30:10.294 "ns_data": { 00:30:10.294 "id": 1, 00:30:10.294 "can_share": true 00:30:10.294 } 00:30:10.294 } 00:30:10.294 ], 00:30:10.294 "mp_policy": "active_passive" 00:30:10.294 } 00:30:10.294 } 00:30:10.294 ] 00:30:10.294 20:47:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:10.294 20:47:28 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:10.294 20:47:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:10.294 20:47:28 -- common/autotest_common.sh@10 -- # set +x 00:30:10.294 [2024-04-26 20:47:28.420580] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:10.294 [2024-04-26 20:47:28.420665] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003bc0 (9): Bad file descriptor 00:30:10.294 [2024-04-26 20:47:28.552498] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
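Everything the async_init host test exercised up to this point was driven over SPDK's JSON-RPC socket, so the flow can be replayed by hand. A sketch using scripts/rpc.py from an SPDK checkout against the default /var/tmp/spdk.sock; the RPC method names and arguments are the ones visible in the trace:

    # Target side: TCP transport, a 1 GiB null bdev, one subsystem, one namespace, one listener.
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py bdev_null_create null0 1024 512      # 1024 MiB / 512 B blocks = 2097152 blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g be937024bde84c078338a7c90cabfeae
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # Host side: attach a controller, reset it, re-read the bdev.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py bdev_nvme_reset_controller nvme0
    scripts/rpc.py bdev_get_bdevs -b nvme0n1

The two bdev_get_bdevs dumps bracketing the reset are the actual check: cntlid moves from 1 to 2 because the reset tears down the fabrics association and reconnects, and the target hands the fresh association a new controller ID. The same attach is repeated further down against a --secure-channel listener on port 4421 with a TLS PSK, where the third association shows cntlid 3.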
00:30:10.294 20:47:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:10.294 20:47:28 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:10.294 20:47:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:10.294 20:47:28 -- common/autotest_common.sh@10 -- # set +x 00:30:10.294 [ 00:30:10.294 { 00:30:10.294 "name": "nvme0n1", 00:30:10.294 "aliases": [ 00:30:10.294 "be937024-bde8-4c07-8338-a7c90cabfeae" 00:30:10.294 ], 00:30:10.294 "product_name": "NVMe disk", 00:30:10.294 "block_size": 512, 00:30:10.294 "num_blocks": 2097152, 00:30:10.294 "uuid": "be937024-bde8-4c07-8338-a7c90cabfeae", 00:30:10.294 "assigned_rate_limits": { 00:30:10.294 "rw_ios_per_sec": 0, 00:30:10.294 "rw_mbytes_per_sec": 0, 00:30:10.294 "r_mbytes_per_sec": 0, 00:30:10.294 "w_mbytes_per_sec": 0 00:30:10.294 }, 00:30:10.294 "claimed": false, 00:30:10.294 "zoned": false, 00:30:10.294 "supported_io_types": { 00:30:10.294 "read": true, 00:30:10.294 "write": true, 00:30:10.294 "unmap": false, 00:30:10.294 "write_zeroes": true, 00:30:10.294 "flush": true, 00:30:10.294 "reset": true, 00:30:10.294 "compare": true, 00:30:10.294 "compare_and_write": true, 00:30:10.294 "abort": true, 00:30:10.294 "nvme_admin": true, 00:30:10.294 "nvme_io": true 00:30:10.294 }, 00:30:10.294 "driver_specific": { 00:30:10.294 "nvme": [ 00:30:10.294 { 00:30:10.294 "trid": { 00:30:10.294 "trtype": "TCP", 00:30:10.294 "adrfam": "IPv4", 00:30:10.294 "traddr": "10.0.0.2", 00:30:10.294 "trsvcid": "4420", 00:30:10.294 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:10.294 }, 00:30:10.294 "ctrlr_data": { 00:30:10.294 "cntlid": 2, 00:30:10.294 "vendor_id": "0x8086", 00:30:10.294 "model_number": "SPDK bdev Controller", 00:30:10.294 "serial_number": "00000000000000000000", 00:30:10.294 "firmware_revision": "24.01.1", 00:30:10.294 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:10.294 "oacs": { 00:30:10.294 "security": 0, 00:30:10.294 "format": 0, 00:30:10.294 "firmware": 0, 00:30:10.294 "ns_manage": 0 00:30:10.294 }, 00:30:10.294 "multi_ctrlr": true, 00:30:10.294 "ana_reporting": false 00:30:10.294 }, 00:30:10.294 "vs": { 00:30:10.294 "nvme_version": "1.3" 00:30:10.294 }, 00:30:10.294 "ns_data": { 00:30:10.294 "id": 1, 00:30:10.294 "can_share": true 00:30:10.294 } 00:30:10.294 } 00:30:10.294 ], 00:30:10.294 "mp_policy": "active_passive" 00:30:10.294 } 00:30:10.294 } 00:30:10.294 ] 00:30:10.294 20:47:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:10.294 20:47:28 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:10.294 20:47:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:10.294 20:47:28 -- common/autotest_common.sh@10 -- # set +x 00:30:10.294 20:47:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:10.294 20:47:28 -- host/async_init.sh@53 -- # mktemp 00:30:10.294 20:47:28 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.gKsh0w7Bnw 00:30:10.294 20:47:28 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:10.295 20:47:28 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.gKsh0w7Bnw 00:30:10.295 20:47:28 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:10.295 20:47:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:10.295 20:47:28 -- common/autotest_common.sh@10 -- # set +x 00:30:10.295 20:47:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:10.295 20:47:28 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:30:10.295 20:47:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:10.295 20:47:28 -- common/autotest_common.sh@10 -- # set +x 00:30:10.295 [2024-04-26 20:47:28.604772] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:10.295 [2024-04-26 20:47:28.604919] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:10.295 20:47:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:10.295 20:47:28 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gKsh0w7Bnw 00:30:10.295 20:47:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:10.295 20:47:28 -- common/autotest_common.sh@10 -- # set +x 00:30:10.295 20:47:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:10.295 20:47:28 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gKsh0w7Bnw 00:30:10.295 20:47:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:10.295 20:47:28 -- common/autotest_common.sh@10 -- # set +x 00:30:10.295 [2024-04-26 20:47:28.620752] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:10.555 nvme0n1 00:30:10.555 20:47:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:10.555 20:47:28 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:10.555 20:47:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:10.555 20:47:28 -- common/autotest_common.sh@10 -- # set +x 00:30:10.555 [ 00:30:10.555 { 00:30:10.555 "name": "nvme0n1", 00:30:10.555 "aliases": [ 00:30:10.555 "be937024-bde8-4c07-8338-a7c90cabfeae" 00:30:10.555 ], 00:30:10.555 "product_name": "NVMe disk", 00:30:10.555 "block_size": 512, 00:30:10.555 "num_blocks": 2097152, 00:30:10.555 "uuid": "be937024-bde8-4c07-8338-a7c90cabfeae", 00:30:10.555 "assigned_rate_limits": { 00:30:10.555 "rw_ios_per_sec": 0, 00:30:10.555 "rw_mbytes_per_sec": 0, 00:30:10.555 "r_mbytes_per_sec": 0, 00:30:10.555 "w_mbytes_per_sec": 0 00:30:10.555 }, 00:30:10.555 "claimed": false, 00:30:10.555 "zoned": false, 00:30:10.555 "supported_io_types": { 00:30:10.555 "read": true, 00:30:10.555 "write": true, 00:30:10.555 "unmap": false, 00:30:10.555 "write_zeroes": true, 00:30:10.555 "flush": true, 00:30:10.555 "reset": true, 00:30:10.555 "compare": true, 00:30:10.555 "compare_and_write": true, 00:30:10.555 "abort": true, 00:30:10.555 "nvme_admin": true, 00:30:10.555 "nvme_io": true 00:30:10.555 }, 00:30:10.555 "driver_specific": { 00:30:10.555 "nvme": [ 00:30:10.555 { 00:30:10.555 "trid": { 00:30:10.555 "trtype": "TCP", 00:30:10.555 "adrfam": "IPv4", 00:30:10.555 "traddr": "10.0.0.2", 00:30:10.555 "trsvcid": "4421", 00:30:10.555 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:10.555 }, 00:30:10.555 "ctrlr_data": { 00:30:10.555 "cntlid": 3, 00:30:10.555 "vendor_id": "0x8086", 00:30:10.555 "model_number": "SPDK bdev Controller", 00:30:10.555 "serial_number": "00000000000000000000", 00:30:10.555 "firmware_revision": "24.01.1", 00:30:10.555 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:10.555 "oacs": { 00:30:10.555 "security": 0, 00:30:10.555 "format": 0, 00:30:10.555 "firmware": 0, 00:30:10.555 "ns_manage": 0 00:30:10.555 }, 00:30:10.555 "multi_ctrlr": true, 00:30:10.555 "ana_reporting": false 00:30:10.555 }, 00:30:10.555 "vs": 
{ 00:30:10.555 "nvme_version": "1.3" 00:30:10.555 }, 00:30:10.555 "ns_data": { 00:30:10.555 "id": 1, 00:30:10.555 "can_share": true 00:30:10.555 } 00:30:10.555 } 00:30:10.555 ], 00:30:10.555 "mp_policy": "active_passive" 00:30:10.555 } 00:30:10.555 } 00:30:10.555 ] 00:30:10.555 20:47:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:10.555 20:47:28 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:10.555 20:47:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:10.555 20:47:28 -- common/autotest_common.sh@10 -- # set +x 00:30:10.555 20:47:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:10.555 20:47:28 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.gKsh0w7Bnw 00:30:10.555 20:47:28 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:30:10.555 20:47:28 -- host/async_init.sh@78 -- # nvmftestfini 00:30:10.555 20:47:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:10.555 20:47:28 -- nvmf/common.sh@116 -- # sync 00:30:10.555 20:47:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:10.555 20:47:28 -- nvmf/common.sh@119 -- # set +e 00:30:10.555 20:47:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:10.555 20:47:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:10.555 rmmod nvme_tcp 00:30:10.555 rmmod nvme_fabrics 00:30:10.555 rmmod nvme_keyring 00:30:10.555 20:47:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:10.555 20:47:28 -- nvmf/common.sh@123 -- # set -e 00:30:10.555 20:47:28 -- nvmf/common.sh@124 -- # return 0 00:30:10.555 20:47:28 -- nvmf/common.sh@477 -- # '[' -n 3703660 ']' 00:30:10.555 20:47:28 -- nvmf/common.sh@478 -- # killprocess 3703660 00:30:10.555 20:47:28 -- common/autotest_common.sh@926 -- # '[' -z 3703660 ']' 00:30:10.555 20:47:28 -- common/autotest_common.sh@930 -- # kill -0 3703660 00:30:10.555 20:47:28 -- common/autotest_common.sh@931 -- # uname 00:30:10.555 20:47:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:10.555 20:47:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3703660 00:30:10.555 20:47:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:10.555 20:47:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:10.555 20:47:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3703660' 00:30:10.555 killing process with pid 3703660 00:30:10.555 20:47:28 -- common/autotest_common.sh@945 -- # kill 3703660 00:30:10.555 20:47:28 -- common/autotest_common.sh@950 -- # wait 3703660 00:30:11.122 20:47:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:11.122 20:47:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:11.122 20:47:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:11.122 20:47:29 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:11.122 20:47:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:11.122 20:47:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.122 20:47:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:11.122 20:47:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.032 20:47:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:13.032 00:30:13.032 real 0m9.582s 00:30:13.032 user 0m3.543s 00:30:13.032 sys 0m4.410s 00:30:13.032 20:47:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:13.032 20:47:31 -- common/autotest_common.sh@10 -- # set +x 00:30:13.032 ************************************ 00:30:13.032 END TEST nvmf_async_init 00:30:13.032 
************************************ 00:30:13.032 20:47:31 -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:13.032 20:47:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:13.032 20:47:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:13.032 20:47:31 -- common/autotest_common.sh@10 -- # set +x 00:30:13.032 ************************************ 00:30:13.032 START TEST dma 00:30:13.032 ************************************ 00:30:13.032 20:47:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:13.305 * Looking for test storage... 00:30:13.305 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:30:13.305 20:47:31 -- host/dma.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:30:13.305 20:47:31 -- nvmf/common.sh@7 -- # uname -s 00:30:13.305 20:47:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:13.305 20:47:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:13.305 20:47:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:13.305 20:47:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:13.305 20:47:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:13.305 20:47:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:13.305 20:47:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:13.305 20:47:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:13.305 20:47:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:13.305 20:47:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:13.305 20:47:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:30:13.305 20:47:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:30:13.305 20:47:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:13.305 20:47:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:13.305 20:47:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:30:13.305 20:47:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:30:13.305 20:47:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:13.305 20:47:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:13.305 20:47:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:13.305 20:47:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.305 20:47:31 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.306 20:47:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.306 20:47:31 -- paths/export.sh@5 -- # export PATH 00:30:13.306 20:47:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.306 20:47:31 -- nvmf/common.sh@46 -- # : 0 00:30:13.306 20:47:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:13.306 20:47:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:13.306 20:47:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:13.306 20:47:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:13.306 20:47:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:13.306 20:47:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:13.306 20:47:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:13.306 20:47:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:13.307 20:47:31 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:30:13.307 20:47:31 -- host/dma.sh@13 -- # exit 0 00:30:13.307 00:30:13.307 real 0m0.078s 00:30:13.307 user 0m0.032s 00:30:13.307 sys 0m0.050s 00:30:13.307 20:47:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:13.307 20:47:31 -- common/autotest_common.sh@10 -- # set +x 00:30:13.307 ************************************ 00:30:13.307 END TEST dma 00:30:13.307 ************************************ 00:30:13.307 20:47:31 -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:13.307 20:47:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:13.307 20:47:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:13.307 20:47:31 -- common/autotest_common.sh@10 -- # set +x 00:30:13.307 ************************************ 00:30:13.307 START TEST nvmf_identify 00:30:13.307 ************************************ 00:30:13.307 20:47:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:13.307 * Looking for test 
storage... 00:30:13.307 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:30:13.307 20:47:31 -- host/identify.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:30:13.307 20:47:31 -- nvmf/common.sh@7 -- # uname -s 00:30:13.307 20:47:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:13.307 20:47:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:13.307 20:47:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:13.307 20:47:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:13.307 20:47:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:13.307 20:47:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:13.307 20:47:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:13.307 20:47:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:13.307 20:47:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:13.307 20:47:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:13.307 20:47:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:30:13.307 20:47:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:30:13.307 20:47:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:13.308 20:47:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:13.308 20:47:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:30:13.308 20:47:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:30:13.308 20:47:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:13.308 20:47:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:13.308 20:47:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:13.308 20:47:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.308 20:47:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.308 20:47:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.308 20:47:31 -- paths/export.sh@5 -- # export PATH 00:30:13.308 20:47:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.308 20:47:31 -- nvmf/common.sh@46 -- # : 0 00:30:13.308 20:47:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:13.308 20:47:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:13.308 20:47:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:13.308 20:47:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:13.308 20:47:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:13.308 20:47:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:13.309 20:47:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:13.309 20:47:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:13.309 20:47:31 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:13.309 20:47:31 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:13.309 20:47:31 -- host/identify.sh@14 -- # nvmftestinit 00:30:13.309 20:47:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:13.309 20:47:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:13.309 20:47:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:13.309 20:47:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:13.309 20:47:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:13.309 20:47:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.309 20:47:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:13.309 20:47:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.309 20:47:31 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:30:13.309 20:47:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:13.309 20:47:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:13.309 20:47:31 -- common/autotest_common.sh@10 -- # set +x 00:30:18.593 20:47:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:18.593 20:47:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:18.593 20:47:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:18.593 20:47:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:18.593 20:47:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:18.593 20:47:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:18.593 20:47:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:18.593 20:47:36 -- nvmf/common.sh@294 -- # net_devs=() 00:30:18.593 20:47:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:18.593 20:47:36 -- 
nvmf/common.sh@295 -- # e810=() 00:30:18.593 20:47:36 -- nvmf/common.sh@295 -- # local -ga e810 00:30:18.593 20:47:36 -- nvmf/common.sh@296 -- # x722=() 00:30:18.593 20:47:36 -- nvmf/common.sh@296 -- # local -ga x722 00:30:18.593 20:47:36 -- nvmf/common.sh@297 -- # mlx=() 00:30:18.593 20:47:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:18.593 20:47:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:18.593 20:47:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:18.593 20:47:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:18.593 20:47:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:18.593 20:47:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:18.593 20:47:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:18.593 20:47:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:18.593 20:47:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:18.593 20:47:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:18.593 20:47:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:18.593 20:47:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:18.593 20:47:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:18.593 20:47:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:18.593 20:47:36 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:30:18.593 20:47:36 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:30:18.593 20:47:36 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:30:18.593 20:47:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:18.593 20:47:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:18.593 20:47:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:30:18.593 Found 0000:27:00.0 (0x8086 - 0x159b) 00:30:18.593 20:47:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:18.593 20:47:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:18.593 20:47:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.593 20:47:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.593 20:47:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:18.593 20:47:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:18.593 20:47:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:30:18.593 Found 0000:27:00.1 (0x8086 - 0x159b) 00:30:18.593 20:47:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:18.593 20:47:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:18.593 20:47:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.593 20:47:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.593 20:47:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:18.593 20:47:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:18.593 20:47:36 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:30:18.593 20:47:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:18.593 20:47:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.593 20:47:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:18.593 20:47:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.593 20:47:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:30:18.593 Found net devices under 0000:27:00.0: cvl_0_0 00:30:18.593 
20:47:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.593 20:47:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:18.593 20:47:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.593 20:47:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:18.593 20:47:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.593 20:47:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:30:18.593 Found net devices under 0000:27:00.1: cvl_0_1 00:30:18.593 20:47:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.593 20:47:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:18.593 20:47:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:18.593 20:47:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:18.593 20:47:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:18.593 20:47:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:18.593 20:47:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:18.593 20:47:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:18.593 20:47:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:18.593 20:47:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:18.593 20:47:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:18.593 20:47:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:18.593 20:47:36 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:18.593 20:47:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:18.593 20:47:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:18.593 20:47:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:18.593 20:47:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:18.594 20:47:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:18.594 20:47:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:18.594 20:47:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:18.594 20:47:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:18.594 20:47:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:18.594 20:47:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:18.594 20:47:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:18.594 20:47:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:18.594 20:47:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:18.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:18.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:30:18.854 00:30:18.854 --- 10.0.0.2 ping statistics --- 00:30:18.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.854 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:30:18.854 20:47:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:18.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:18.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:30:18.854 00:30:18.854 --- 10.0.0.1 ping statistics --- 00:30:18.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.854 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:30:18.854 20:47:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:18.854 20:47:36 -- nvmf/common.sh@410 -- # return 0 00:30:18.854 20:47:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:18.854 20:47:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:18.854 20:47:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:18.854 20:47:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:18.854 20:47:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:18.854 20:47:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:18.854 20:47:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:18.854 20:47:36 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:18.854 20:47:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:18.854 20:47:36 -- common/autotest_common.sh@10 -- # set +x 00:30:18.854 20:47:36 -- host/identify.sh@19 -- # nvmfpid=3707925 00:30:18.854 20:47:36 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:18.854 20:47:36 -- host/identify.sh@23 -- # waitforlisten 3707925 00:30:18.854 20:47:36 -- common/autotest_common.sh@819 -- # '[' -z 3707925 ']' 00:30:18.854 20:47:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:18.854 20:47:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:18.854 20:47:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:18.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:18.854 20:47:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:18.854 20:47:36 -- common/autotest_common.sh@10 -- # set +x 00:30:18.854 20:47:36 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:18.854 [2024-04-26 20:47:37.061545] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:30:18.854 [2024-04-26 20:47:37.061692] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:18.854 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.113 [2024-04-26 20:47:37.202289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:19.113 [2024-04-26 20:47:37.315014] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:19.113 [2024-04-26 20:47:37.315210] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:19.113 [2024-04-26 20:47:37.315225] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:19.113 [2024-04-26 20:47:37.315235] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
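For the identify test the target is launched inside the namespace, and the harness refuses to issue any rpc_cmd until the app is up. The nvmf_tgt invocation below is verbatim from the trace (-i 0 selects shared-memory instance 0, -e 0xFFFF is the tracepoint group mask echoed in the notices, -m 0xF is a four-core mask matching the four reactors reported next); the polling loop is only a simplified stand-in for what autotest_common.sh's waitforlisten does, not a copy of it:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Wait for the RPC UNIX socket to appear, bailing out if the target dies first.
    for _ in $(seq 1 100); do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
        [ -S /var/tmp/spdk.sock ] && break
        sleep 0.1
    done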
00:30:19.113 [2024-04-26 20:47:37.315295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.113 [2024-04-26 20:47:37.315415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:19.113 [2024-04-26 20:47:37.315505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.113 [2024-04-26 20:47:37.315515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:19.679 20:47:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:19.679 20:47:37 -- common/autotest_common.sh@852 -- # return 0 00:30:19.679 20:47:37 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:19.679 20:47:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.679 20:47:37 -- common/autotest_common.sh@10 -- # set +x 00:30:19.679 [2024-04-26 20:47:37.765535] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:19.679 20:47:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.679 20:47:37 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:19.679 20:47:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:19.679 20:47:37 -- common/autotest_common.sh@10 -- # set +x 00:30:19.679 20:47:37 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:19.679 20:47:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.679 20:47:37 -- common/autotest_common.sh@10 -- # set +x 00:30:19.679 Malloc0 00:30:19.679 20:47:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.679 20:47:37 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:19.679 20:47:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.679 20:47:37 -- common/autotest_common.sh@10 -- # set +x 00:30:19.679 20:47:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.679 20:47:37 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:30:19.679 20:47:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.679 20:47:37 -- common/autotest_common.sh@10 -- # set +x 00:30:19.679 20:47:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.679 20:47:37 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:19.679 20:47:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.679 20:47:37 -- common/autotest_common.sh@10 -- # set +x 00:30:19.679 [2024-04-26 20:47:37.863502] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:19.679 20:47:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.679 20:47:37 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:19.679 20:47:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.679 20:47:37 -- common/autotest_common.sh@10 -- # set +x 00:30:19.679 20:47:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.679 20:47:37 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:19.679 20:47:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.679 20:47:37 -- common/autotest_common.sh@10 -- # set +x 00:30:19.679 [2024-04-26 20:47:37.879246] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:30:19.679 [ 
00:30:19.679 { 00:30:19.679 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:19.679 "subtype": "Discovery", 00:30:19.679 "listen_addresses": [ 00:30:19.679 { 00:30:19.679 "transport": "TCP", 00:30:19.679 "trtype": "TCP", 00:30:19.679 "adrfam": "IPv4", 00:30:19.679 "traddr": "10.0.0.2", 00:30:19.679 "trsvcid": "4420" 00:30:19.679 } 00:30:19.679 ], 00:30:19.679 "allow_any_host": true, 00:30:19.679 "hosts": [] 00:30:19.679 }, 00:30:19.679 { 00:30:19.679 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:19.679 "subtype": "NVMe", 00:30:19.679 "listen_addresses": [ 00:30:19.679 { 00:30:19.679 "transport": "TCP", 00:30:19.679 "trtype": "TCP", 00:30:19.679 "adrfam": "IPv4", 00:30:19.679 "traddr": "10.0.0.2", 00:30:19.679 "trsvcid": "4420" 00:30:19.679 } 00:30:19.679 ], 00:30:19.679 "allow_any_host": true, 00:30:19.679 "hosts": [], 00:30:19.679 "serial_number": "SPDK00000000000001", 00:30:19.679 "model_number": "SPDK bdev Controller", 00:30:19.679 "max_namespaces": 32, 00:30:19.679 "min_cntlid": 1, 00:30:19.679 "max_cntlid": 65519, 00:30:19.679 "namespaces": [ 00:30:19.679 { 00:30:19.679 "nsid": 1, 00:30:19.679 "bdev_name": "Malloc0", 00:30:19.679 "name": "Malloc0", 00:30:19.679 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:19.679 "eui64": "ABCDEF0123456789", 00:30:19.679 "uuid": "8c164a9c-2a08-4233-940b-afeaea4e4701" 00:30:19.679 } 00:30:19.679 ] 00:30:19.679 } 00:30:19.679 ] 00:30:19.679 20:47:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.679 20:47:37 -- host/identify.sh@39 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:19.679 [2024-04-26 20:47:37.922082] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
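With the target populated (the nvmf_get_subsystems dump above shows the discovery subsystem plus cnode1 carrying the Malloc0 namespace), the test points spdk_nvme_identify at the discovery service via a transport ID string. This is the invocation from the trace, reflowed onto several lines for readability:

    # Connect to the discovery subsystem over TCP and dump everything identify can read;
    # -L all also enables every debug log flag, which produces the DEBUG flood that follows.
    /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all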
00:30:19.679 [2024-04-26 20:47:37.922163] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3707980 ] 00:30:19.679 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.679 [2024-04-26 20:47:37.973480] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:30:19.679 [2024-04-26 20:47:37.973561] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:19.679 [2024-04-26 20:47:37.973572] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:19.679 [2024-04-26 20:47:37.973591] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:19.679 [2024-04-26 20:47:37.973605] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:19.679 [2024-04-26 20:47:37.973966] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:30:19.679 [2024-04-26 20:47:37.974008] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x613000001fc0 0 00:30:19.679 [2024-04-26 20:47:37.988396] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:19.679 [2024-04-26 20:47:37.988414] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:19.679 [2024-04-26 20:47:37.988421] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:19.679 [2024-04-26 20:47:37.988426] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:19.679 [2024-04-26 20:47:37.988469] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.988476] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.988486] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:30:19.679 [2024-04-26 20:47:37.988507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:19.679 [2024-04-26 20:47:37.988531] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:19.679 [2024-04-26 20:47:37.996398] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.679 [2024-04-26 20:47:37.996415] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.679 [2024-04-26 20:47:37.996421] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.996427] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:30:19.679 [2024-04-26 20:47:37.996440] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:19.679 [2024-04-26 20:47:37.996452] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:30:19.679 [2024-04-26 20:47:37.996460] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:30:19.679 [2024-04-26 20:47:37.996482] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.996489] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.996496] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:30:19.679 [2024-04-26 20:47:37.996510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.679 [2024-04-26 20:47:37.996531] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:19.679 [2024-04-26 20:47:37.996681] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.679 [2024-04-26 20:47:37.996690] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.679 [2024-04-26 20:47:37.996700] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.996705] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:30:19.679 [2024-04-26 20:47:37.996713] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:30:19.679 [2024-04-26 20:47:37.996723] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:30:19.679 [2024-04-26 20:47:37.996734] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.996739] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.996744] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:30:19.679 [2024-04-26 20:47:37.996755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.679 [2024-04-26 20:47:37.996767] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:19.679 [2024-04-26 20:47:37.996873] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.679 [2024-04-26 20:47:37.996881] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.679 [2024-04-26 20:47:37.996885] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.996889] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:30:19.679 [2024-04-26 20:47:37.996896] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:30:19.679 [2024-04-26 20:47:37.996904] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:30:19.679 [2024-04-26 20:47:37.996912] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.996917] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.996924] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:30:19.679 [2024-04-26 20:47:37.996933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.679 [2024-04-26 20:47:37.996945] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:19.679 [2024-04-26 20:47:37.997050] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.679 [2024-04-26 20:47:37.997057] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.679 [2024-04-26 20:47:37.997060] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.997064] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:30:19.679 [2024-04-26 20:47:37.997071] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:19.679 [2024-04-26 20:47:37.997084] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.997089] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.997093] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:30:19.679 [2024-04-26 20:47:37.997104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.679 [2024-04-26 20:47:37.997114] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:19.679 [2024-04-26 20:47:37.997221] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.679 [2024-04-26 20:47:37.997227] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.679 [2024-04-26 20:47:37.997231] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.997235] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:30:19.679 [2024-04-26 20:47:37.997242] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:30:19.679 [2024-04-26 20:47:37.997250] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:30:19.679 [2024-04-26 20:47:37.997258] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:19.679 [2024-04-26 20:47:37.997365] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:30:19.679 [2024-04-26 20:47:37.997370] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:19.679 [2024-04-26 20:47:37.997393] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.997398] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.997404] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:30:19.679 [2024-04-26 20:47:37.997413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.679 [2024-04-26 20:47:37.997423] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:19.679 [2024-04-26 20:47:37.997527] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.679 [2024-04-26 20:47:37.997534] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.679 [2024-04-26 20:47:37.997538] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.997542] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:30:19.679 [2024-04-26 20:47:37.997549] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:19.679 [2024-04-26 20:47:37.997561] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.997566] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.997571] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:30:19.679 [2024-04-26 20:47:37.997580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.679 [2024-04-26 20:47:37.997590] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:19.679 [2024-04-26 20:47:37.997690] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.679 [2024-04-26 20:47:37.997696] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.679 [2024-04-26 20:47:37.997700] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.997705] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:30:19.679 [2024-04-26 20:47:37.997711] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:19.679 [2024-04-26 20:47:37.997718] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:30:19.679 [2024-04-26 20:47:37.997726] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:30:19.679 [2024-04-26 20:47:37.997737] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:30:19.679 [2024-04-26 20:47:37.997753] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.997759] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.997765] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:30:19.679 [2024-04-26 20:47:37.997775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.679 [2024-04-26 20:47:37.997785] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:19.679 [2024-04-26 20:47:37.997923] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:19.679 [2024-04-26 20:47:37.997932] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:19.679 [2024-04-26 20:47:37.997939] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.997946] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x613000001fc0): datao=0, datal=4096, cccid=0 00:30:19.679 [2024-04-26 20:47:37.997956] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x613000001fc0): expected_datao=0, payload_size=4096 00:30:19.679 [2024-04-26 20:47:37.997978] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.997984] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.998050] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.679 [2024-04-26 20:47:37.998056] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.679 [2024-04-26 20:47:37.998060] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.998064] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:30:19.679 [2024-04-26 20:47:37.998077] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:30:19.679 [2024-04-26 20:47:37.998085] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:30:19.679 [2024-04-26 20:47:37.998093] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:30:19.679 [2024-04-26 20:47:37.998099] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:30:19.679 [2024-04-26 20:47:37.998107] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:30:19.679 [2024-04-26 20:47:37.998113] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:30:19.679 [2024-04-26 20:47:37.998121] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:30:19.679 [2024-04-26 20:47:37.998131] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.998136] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.998141] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:30:19.679 [2024-04-26 20:47:37.998152] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:19.679 [2024-04-26 20:47:37.998164] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:19.679 [2024-04-26 20:47:37.998274] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.679 [2024-04-26 20:47:37.998281] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.679 [2024-04-26 20:47:37.998285] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.998289] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:30:19.679 [2024-04-26 20:47:37.998300] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.998305] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.998310] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:30:19.679 [2024-04-26 20:47:37.998319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:19.679 [2024-04-26 20:47:37.998325] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.998330] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.679 [2024-04-26 20:47:37.998334] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x613000001fc0) 00:30:19.679 [2024-04-26 20:47:37.998341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:19.679 [2024-04-26 20:47:37.998349] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.680 [2024-04-26 20:47:37.998353] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.680 [2024-04-26 20:47:37.998357] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x613000001fc0) 00:30:19.680 [2024-04-26 20:47:37.998364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:19.680 [2024-04-26 20:47:37.998370] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.680 [2024-04-26 20:47:37.998374] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.680 [2024-04-26 20:47:37.998378] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:30:19.680 [2024-04-26 20:47:37.998391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:19.680 [2024-04-26 20:47:37.998397] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:30:19.680 [2024-04-26 20:47:37.998406] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:19.680 [2024-04-26 20:47:37.998414] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.680 [2024-04-26 20:47:37.998418] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.680 [2024-04-26 20:47:37.998423] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:30:19.680 [2024-04-26 20:47:37.998436] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.680 [2024-04-26 20:47:37.998448] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:19.680 [2024-04-26 20:47:37.998453] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 00:30:19.680 [2024-04-26 20:47:37.998458] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:30:19.680 [2024-04-26 20:47:37.998463] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:30:19.680 [2024-04-26 20:47:37.998468] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:30:19.680 [2024-04-26 20:47:37.998603] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
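The NOTICE lines above trace the standard NVMe-oF controller initialization sequence end to end: read CAP, check and clear CC.EN, re-enable with CC.EN = 1, wait for CSTS.RDY = 1, IDENTIFY, configure AER (the four ASYNC EVENT REQUEST capsules, cid 0 through 3), then query the keep alive timer. All of it is driven by a single connect call in SPDK's host driver. A minimal sketch of that call, under stated assumptions: the program name identify_sketch.c is invented, and the transport ID string simply reuses the values visible in this log (10.0.0.2:4420, discovery NQN).

/*
 * identify_sketch.c (hypothetical): connect to the discovery controller this
 * log is initializing. spdk_nvme_connect() drives the same state machine the
 * DEBUG lines trace: connect adminq, read VS/CAP, write CC.EN, wait for
 * CSTS.RDY, IDENTIFY, configure AER, set the keep alive timeout, then ready.
 */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID format the test passes to spdk_nvme_identify -r. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2014-08.org.nvmexpress.discovery") != 0) {
		return 1;
	}

	/* NULL opts = SPDK defaults; returns only after init reaches ready. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	printf("connected, CNTLID 0x%04x\n", spdk_nvme_ctrlr_get_data(ctrlr)->cntlid);
	spdk_nvme_detach(ctrlr);
	return 0;
}

spdk_nvme_connect() returns only once the state machine reaches "ready (no timeout)", so by that point the AER and keep-alive setup logged above has already completed.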
00:30:19.680 [2024-04-26 20:47:37.998610] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.680 [2024-04-26 20:47:37.998614] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.680 [2024-04-26 20:47:37.998618] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:30:19.680 [2024-04-26 20:47:37.998625] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:30:19.680 [2024-04-26 20:47:37.998633] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:30:19.680 [2024-04-26 20:47:37.998647] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.680 [2024-04-26 20:47:37.998655] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.680 [2024-04-26 20:47:37.998660] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:30:19.680 [2024-04-26 20:47:37.998669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.680 [2024-04-26 20:47:37.998679] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:30:19.680 [2024-04-26 20:47:37.998795] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:19.680 [2024-04-26 20:47:37.998806] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:19.680 [2024-04-26 20:47:37.998814] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:19.680 [2024-04-26 20:47:37.998821] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=4096, cccid=4 00:30:19.680 [2024-04-26 20:47:37.998831] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=4096 00:30:19.680 [2024-04-26 20:47:37.998848] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:19.680 [2024-04-26 20:47:37.998854] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:19.942 [2024-04-26 20:47:38.039627] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.942 [2024-04-26 20:47:38.039643] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.942 [2024-04-26 20:47:38.039647] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.942 [2024-04-26 20:47:38.039653] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:30:19.942 [2024-04-26 20:47:38.039673] nvme_ctrlr.c:4023:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:30:19.942 [2024-04-26 20:47:38.039713] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.942 [2024-04-26 20:47:38.039720] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.942 [2024-04-26 20:47:38.039726] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:30:19.942 [2024-04-26 20:47:38.039736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.942 [2024-04-26 20:47:38.039745] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.942 [2024-04-26 20:47:38.039750] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.942 [2024-04-26 20:47:38.039755] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x613000001fc0) 00:30:19.942 [2024-04-26 20:47:38.039762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:19.942 [2024-04-26 20:47:38.039779] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:30:19.942 [2024-04-26 20:47:38.039785] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:30:19.942 [2024-04-26 20:47:38.040002] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:19.942 [2024-04-26 20:47:38.040010] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:19.942 [2024-04-26 20:47:38.040015] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:19.942 [2024-04-26 20:47:38.040022] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=1024, cccid=4 00:30:19.942 [2024-04-26 20:47:38.040029] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=1024 00:30:19.942 [2024-04-26 20:47:38.040038] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:19.942 [2024-04-26 20:47:38.040042] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:19.942 [2024-04-26 20:47:38.040049] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.942 [2024-04-26 20:47:38.040059] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.942 [2024-04-26 20:47:38.040063] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.942 [2024-04-26 20:47:38.040068] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x613000001fc0 00:30:19.942 [2024-04-26 20:47:38.084397] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.942 [2024-04-26 20:47:38.084411] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.942 [2024-04-26 20:47:38.084416] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.942 [2024-04-26 20:47:38.084421] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:30:19.942 [2024-04-26 20:47:38.084442] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.942 [2024-04-26 20:47:38.084447] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.942 [2024-04-26 20:47:38.084452] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:30:19.942 [2024-04-26 20:47:38.084463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.942 [2024-04-26 20:47:38.084482] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:30:19.942 [2024-04-26 20:47:38.084615] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:19.942 [2024-04-26 20:47:38.084625] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:19.942 [2024-04-26 20:47:38.084632] 
nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:19.943 [2024-04-26 20:47:38.084639] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=3072, cccid=4 00:30:19.943 [2024-04-26 20:47:38.084647] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=3072 00:30:19.943 [2024-04-26 20:47:38.084663] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:19.943 [2024-04-26 20:47:38.084668] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:19.943 [2024-04-26 20:47:38.084737] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.943 [2024-04-26 20:47:38.084744] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.943 [2024-04-26 20:47:38.084747] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.943 [2024-04-26 20:47:38.084752] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:30:19.943 [2024-04-26 20:47:38.084763] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.943 [2024-04-26 20:47:38.084768] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.943 [2024-04-26 20:47:38.084778] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:30:19.943 [2024-04-26 20:47:38.084787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.943 [2024-04-26 20:47:38.084799] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:30:19.943 [2024-04-26 20:47:38.084922] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:19.943 [2024-04-26 20:47:38.084929] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:19.943 [2024-04-26 20:47:38.084933] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:19.943 [2024-04-26 20:47:38.084937] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=8, cccid=4 00:30:19.943 [2024-04-26 20:47:38.084942] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=8 00:30:19.943 [2024-04-26 20:47:38.084951] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:19.943 [2024-04-26 20:47:38.084955] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:19.943 [2024-04-26 20:47:38.125620] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.943 [2024-04-26 20:47:38.125641] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.943 [2024-04-26 20:47:38.125645] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.943 [2024-04-26 20:47:38.125650] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0
=====================================================
00:30:19.943 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:30:19.943 =====================================================
00:30:19.943 Controller Capabilities/Features
00:30:19.943 ================================
00:30:19.943 Vendor ID: 0000
00:30:19.943 Subsystem Vendor ID: 0000
00:30:19.943 Serial Number: ....................
00:30:19.943 Model Number: ........................................
00:30:19.943 Firmware Version: 24.01.1
00:30:19.943 Recommended Arb Burst: 0
00:30:19.943 IEEE OUI Identifier: 00 00 00
00:30:19.943 Multi-path I/O
00:30:19.943 May have multiple subsystem ports: No
00:30:19.943 May have multiple controllers: No
00:30:19.943 Associated with SR-IOV VF: No
00:30:19.943 Max Data Transfer Size: 131072
00:30:19.943 Max Number of Namespaces: 0
00:30:19.943 Max Number of I/O Queues: 1024
00:30:19.943 NVMe Specification Version (VS): 1.3
00:30:19.943 NVMe Specification Version (Identify): 1.3
00:30:19.943 Maximum Queue Entries: 128
00:30:19.943 Contiguous Queues Required: Yes
00:30:19.943 Arbitration Mechanisms Supported
00:30:19.943 Weighted Round Robin: Not Supported
00:30:19.943 Vendor Specific: Not Supported
00:30:19.943 Reset Timeout: 15000 ms
00:30:19.943 Doorbell Stride: 4 bytes
00:30:19.943 NVM Subsystem Reset: Not Supported
00:30:19.943 Command Sets Supported
00:30:19.943 NVM Command Set: Supported
00:30:19.943 Boot Partition: Not Supported
00:30:19.943 Memory Page Size Minimum: 4096 bytes
00:30:19.943 Memory Page Size Maximum: 4096 bytes
00:30:19.943 Persistent Memory Region: Not Supported
00:30:19.943 Optional Asynchronous Events Supported
00:30:19.943 Namespace Attribute Notices: Not Supported
00:30:19.943 Firmware Activation Notices: Not Supported
00:30:19.943 ANA Change Notices: Not Supported
00:30:19.943 PLE Aggregate Log Change Notices: Not Supported
00:30:19.943 LBA Status Info Alert Notices: Not Supported
00:30:19.943 EGE Aggregate Log Change Notices: Not Supported
00:30:19.943 Normal NVM Subsystem Shutdown event: Not Supported
00:30:19.943 Zone Descriptor Change Notices: Not Supported
00:30:19.943 Discovery Log Change Notices: Supported
00:30:19.943 Controller Attributes
00:30:19.943 128-bit Host Identifier: Not Supported
00:30:19.943 Non-Operational Permissive Mode: Not Supported
00:30:19.943 NVM Sets: Not Supported
00:30:19.943 Read Recovery Levels: Not Supported
00:30:19.943 Endurance Groups: Not Supported
00:30:19.943 Predictable Latency Mode: Not Supported
00:30:19.943 Traffic Based Keep ALive: Not Supported
00:30:19.943 Namespace Granularity: Not Supported
00:30:19.943 SQ Associations: Not Supported
00:30:19.943 UUID List: Not Supported
00:30:19.943 Multi-Domain Subsystem: Not Supported
00:30:19.943 Fixed Capacity Management: Not Supported
00:30:19.943 Variable Capacity Management: Not Supported
00:30:19.943 Delete Endurance Group: Not Supported
00:30:19.943 Delete NVM Set: Not Supported
00:30:19.943 Extended LBA Formats Supported: Not Supported
00:30:19.943 Flexible Data Placement Supported: Not Supported
00:30:19.943
00:30:19.943 Controller Memory Buffer Support
00:30:19.943 ================================
00:30:19.943 Supported: No
00:30:19.943
00:30:19.943 Persistent Memory Region Support
00:30:19.943 ================================
00:30:19.943 Supported: No
00:30:19.943
00:30:19.943 Admin Command Set Attributes
00:30:19.943 ============================
00:30:19.943 Security Send/Receive: Not Supported
00:30:19.943 Format NVM: Not Supported
00:30:19.943 Firmware Activate/Download: Not Supported
00:30:19.943 Namespace Management: Not Supported
00:30:19.943 Device Self-Test: Not Supported
00:30:19.943 Directives: Not Supported
00:30:19.943 NVMe-MI: Not Supported
00:30:19.943 Virtualization Management: Not Supported
00:30:19.943 Doorbell Buffer Config: Not Supported
00:30:19.943 Get LBA Status Capability: Not Supported
00:30:19.943 Command & Feature Lockdown Capability: Not Supported
00:30:19.943 Abort Command Limit: 1
00:30:19.943 Async Event Request Limit: 4
00:30:19.943 Number of Firmware Slots: N/A
00:30:19.943 Firmware Slot 1 Read-Only: N/A
00:30:19.943 Firmware Activation Without Reset: N/A
00:30:19.943 Multiple Update Detection Support: N/A
00:30:19.943 Firmware Update Granularity: No Information Provided
00:30:19.943 Per-Namespace SMART Log: No
00:30:19.943 Asymmetric Namespace Access Log Page: Not Supported
00:30:19.943 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:30:19.943 Command Effects Log Page: Not Supported
00:30:19.943 Get Log Page Extended Data: Supported
00:30:19.943 Telemetry Log Pages: Not Supported
00:30:19.943 Persistent Event Log Pages: Not Supported
00:30:19.943 Supported Log Pages Log Page: May Support
00:30:19.943 Commands Supported & Effects Log Page: Not Supported
00:30:19.943 Feature Identifiers & Effects Log Page:May Support
00:30:19.943 NVMe-MI Commands & Effects Log Page: May Support
00:30:19.943 Data Area 4 for Telemetry Log: Not Supported
00:30:19.943 Error Log Page Entries Supported: 128
00:30:19.943 Keep Alive: Not Supported
00:30:19.943
00:30:19.943 NVM Command Set Attributes
00:30:19.943 ==========================
00:30:19.943 Submission Queue Entry Size
00:30:19.943 Max: 1
00:30:19.943 Min: 1
00:30:19.943 Completion Queue Entry Size
00:30:19.943 Max: 1
00:30:19.943 Min: 1
00:30:19.943 Number of Namespaces: 0
00:30:19.943 Compare Command: Not Supported
00:30:19.943 Write Uncorrectable Command: Not Supported
00:30:19.943 Dataset Management Command: Not Supported
00:30:19.943 Write Zeroes Command: Not Supported
00:30:19.943 Set Features Save Field: Not Supported
00:30:19.943 Reservations: Not Supported
00:30:19.943 Timestamp: Not Supported
00:30:19.943 Copy: Not Supported
00:30:19.943 Volatile Write Cache: Not Present
00:30:19.943 Atomic Write Unit (Normal): 1
00:30:19.943 Atomic Write Unit (PFail): 1
00:30:19.943 Atomic Compare & Write Unit: 1
00:30:19.943 Fused Compare & Write: Supported
00:30:19.943 Scatter-Gather List
00:30:19.943 SGL Command Set: Supported
00:30:19.943 SGL Keyed: Supported
00:30:19.943 SGL Bit Bucket Descriptor: Not Supported
00:30:19.943 SGL Metadata Pointer: Not Supported
00:30:19.943 Oversized SGL: Not Supported
00:30:19.943 SGL Metadata Address: Not Supported
00:30:19.943 SGL Offset: Supported
00:30:19.943 Transport SGL Data Block: Not Supported
00:30:19.943 Replay Protected Memory Block: Not Supported
00:30:19.943
00:30:19.943 Firmware Slot Information
00:30:19.943 =========================
00:30:19.943 Active slot: 0
00:30:19.943
00:30:19.943
00:30:19.943 Error Log
00:30:19.943 =========
00:30:19.943
00:30:19.943 Active Namespaces
00:30:19.943 =================
00:30:19.943 Discovery Log Page
00:30:19.943 ==================
00:30:19.943 Generation Counter: 2
00:30:19.943 Number of Records: 2
00:30:19.943 Record Format: 0
00:30:19.943
00:30:19.943 Discovery Log Entry 0
00:30:19.943 ----------------------
00:30:19.943 Transport Type: 3 (TCP)
00:30:19.943 Address Family: 1 (IPv4)
00:30:19.943 Subsystem Type: 3 (Current Discovery Subsystem)
00:30:19.943 Entry Flags:
00:30:19.943 Duplicate Returned Information: 1
00:30:19.943 Explicit Persistent Connection Support for Discovery: 1
00:30:19.943 Transport Requirements:
00:30:19.943 Secure Channel: Not Required
00:30:19.944 Port ID: 0 (0x0000)
00:30:19.944 Controller ID: 65535 (0xffff)
00:30:19.944 Admin Max SQ Size: 128
00:30:19.944 Transport Service Identifier: 4420
00:30:19.944 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:30:19.944 Transport Address: 10.0.0.2
00:30:19.944 Discovery Log Entry 1
00:30:19.944 ----------------------
00:30:19.944 Transport Type: 3 (TCP)
00:30:19.944 Address Family: 1 (IPv4)
00:30:19.944 Subsystem Type: 2 (NVM Subsystem)
00:30:19.944 Entry Flags:
00:30:19.944 Duplicate Returned Information: 0
00:30:19.944 Explicit Persistent Connection Support for Discovery: 0
00:30:19.944 Transport Requirements:
00:30:19.944 Secure Channel: Not Required
00:30:19.944 Port ID: 0 (0x0000)
00:30:19.944 Controller ID: 65535 (0xffff)
00:30:19.944 Admin Max SQ Size: 128
00:30:19.944 Transport Service Identifier: 4420
00:30:19.944 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:30:19.944 Transport Address: 10.0.0.2
[2024-04-26 20:47:38.125770] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:30:19.944 [2024-04-26 20:47:38.125785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.944 [2024-04-26 20:47:38.125794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.944 [2024-04-26 20:47:38.125801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.944 [2024-04-26 20:47:38.125807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.944 [2024-04-26 20:47:38.125820] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.944 [2024-04-26 20:47:38.125825] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.944 [2024-04-26 20:47:38.125832] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:30:19.944 [2024-04-26 20:47:38.125843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.944 [2024-04-26 20:47:38.125861] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:30:19.944 [2024-04-26 20:47:38.125973] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.944 [2024-04-26 20:47:38.125981] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.944 [2024-04-26 20:47:38.125986] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.944 [2024-04-26 20:47:38.125991] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:30:19.944 [2024-04-26 20:47:38.126001] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.944 [2024-04-26 20:47:38.126006] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.944 [2024-04-26 20:47:38.126013] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:30:19.944 [2024-04-26 20:47:38.126022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.944 [2024-04-26 20:47:38.126035] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:30:19.944 [2024-04-26 20:47:38.126158] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.944 [2024-04-26 20:47:38.126165]
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.944 [2024-04-26 20:47:38.126169] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.944 [2024-04-26 20:47:38.126174] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:30:19.944 [2024-04-26 20:47:38.126180] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:30:19.944 [2024-04-26 20:47:38.126186] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:30:19.944 [2024-04-26 20:47:38.126196] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.944 [2024-04-26 20:47:38.126202] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.944 [2024-04-26 20:47:38.126207] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:30:19.944 [2024-04-26 20:47:38.126216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.944 [2024-04-26 20:47:38.126228] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:30:19.944 [2024-04-26 20:47:38.126334] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.944 [2024-04-26 20:47:38.126341] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.944 [2024-04-26 20:47:38.126345] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.944 [2024-04-26 20:47:38.126349] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:30:19.944 [2024-04-26 20:47:38.126358] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.944 [2024-04-26 20:47:38.126363] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.944 [2024-04-26 20:47:38.126367] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:30:19.944 [2024-04-26 20:47:38.126375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.944 [2024-04-26 20:47:38.126390] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:30:19.944 [2024-04-26 20:47:38.126509] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.944 [2024-04-26 20:47:38.126515] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.944 [2024-04-26 20:47:38.126519] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.944 [2024-04-26 20:47:38.126523] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:30:19.944 [2024-04-26 20:47:38.126532] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.944 [2024-04-26 20:47:38.126536] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.944 [2024-04-26 20:47:38.126541] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:30:19.944 [2024-04-26 20:47:38.126549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.944 [2024-04-26 20:47:38.126558] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:30:19.944 [2024-04-26 20:47:38.126659] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.944 [2024-04-26 20:47:38.126666] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.944 [2024-04-26 20:47:38.126669] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.944 [2024-04-26 20:47:38.126674] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:30:19.944 [2024-04-26 20:47:38.126683] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.944 [2024-04-26 20:47:38.126687] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.944 [2024-04-26 20:47:38.126696] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:30:19.944 [2024-04-26 20:47:38.126703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.944 [2024-04-26 20:47:38.126713] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:30:19.944 [2024-04-26 20:47:38.126816] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.944 [2024-04-26 20:47:38.126822] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.944 [2024-04-26 20:47:38.126826] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.944 [2024-04-26 20:47:38.126830] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:30:19.944 [2024-04-26 20:47:38.126839] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.944 [2024-04-26 20:47:38.126843] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.944 [2024-04-26 20:47:38.126846] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:30:19.944 [2024-04-26 20:47:38.126854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.944 [2024-04-26 20:47:38.126864] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:30:19.944 [2024-04-26 20:47:38.126958] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.944 [2024-04-26 20:47:38.126964] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.944 [2024-04-26 20:47:38.126968] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.944 [2024-04-26 20:47:38.126972] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:30:19.944 [2024-04-26 20:47:38.126981] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.944 [2024-04-26 20:47:38.126985] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.944 [2024-04-26 20:47:38.126990] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:30:19.944 [2024-04-26 20:47:38.126999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.944 [2024-04-26 20:47:38.127009] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:30:19.944 [2024-04-26 20:47:38.127116] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:30:19.944 [2024-04-26 20:47:38.127123] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.944 [2024-04-26 20:47:38.127127] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.944 [2024-04-26 20:47:38.127131] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:30:19.944 [2024-04-26 20:47:38.127140] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.944 [2024-04-26 20:47:38.127144] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.944 [2024-04-26 20:47:38.127148] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:30:19.944 [2024-04-26 20:47:38.127156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.944 [2024-04-26 20:47:38.127166] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:30:19.944 [2024-04-26 20:47:38.127265] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.944 [2024-04-26 20:47:38.127271] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.944 [2024-04-26 20:47:38.127275] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.944 [2024-04-26 20:47:38.127280] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:30:19.944 [2024-04-26 20:47:38.127289] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.944 [2024-04-26 20:47:38.127293] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.944 [2024-04-26 20:47:38.127297] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:30:19.944 [2024-04-26 20:47:38.127305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.944 [2024-04-26 20:47:38.127314] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:30:19.944 [2024-04-26 20:47:38.127414] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.944 [2024-04-26 20:47:38.127421] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.944 [2024-04-26 20:47:38.127425] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.945 [2024-04-26 20:47:38.127429] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:30:19.945 [2024-04-26 20:47:38.127438] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.945 [2024-04-26 20:47:38.127442] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.945 [2024-04-26 20:47:38.127446] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:30:19.945 [2024-04-26 20:47:38.127454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.945 [2024-04-26 20:47:38.127464] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:30:19.945 [2024-04-26 20:47:38.127560] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.945 [2024-04-26 20:47:38.127567] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
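The repeated FABRIC PROPERTY GET qid:0 cid:3 capsules in this stretch are the host re-reading the CSTS register while it waits out the 10000 ms shutdown timeout set earlier (RTD3E = 0). The driver does this internally in nvme_ctrlr_shutdown_poll_async; a hedged equivalent through SPDK's public register accessor might look like the sketch below, where ctrlr is assumed to be a controller handle obtained from spdk_nvme_connect().

#include <stdbool.h>
#include "spdk/nvme.h"

/* True once CSTS.SHST reports shutdown complete. Over fabrics, reading the
 * register is carried as exactly the PROPERTY GET capsules logged above. */
static bool
shutdown_complete(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

	return csts.bits.shst == SPDK_NVME_SHST_COMPLETE;
}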
00:30:19.945 [2024-04-26 20:47:38.127571] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.945 [2024-04-26 20:47:38.127575] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:30:19.945 [2024-04-26 20:47:38.127584] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.945 [2024-04-26 20:47:38.127588] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.945 [2024-04-26 20:47:38.127593] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:30:19.945 [2024-04-26 20:47:38.127600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.945 [2024-04-26 20:47:38.127609] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:30:19.945 [2024-04-26 20:47:38.127717] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.945 [2024-04-26 20:47:38.127725] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.945 [2024-04-26 20:47:38.127735] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.945 [2024-04-26 20:47:38.127739] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:30:19.945 [2024-04-26 20:47:38.127749] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.945 [2024-04-26 20:47:38.127752] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.945 [2024-04-26 20:47:38.127757] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:30:19.945 [2024-04-26 20:47:38.127765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.945 [2024-04-26 20:47:38.127774] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:30:19.945 [2024-04-26 20:47:38.127874] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.945 [2024-04-26 20:47:38.127881] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.945 [2024-04-26 20:47:38.127885] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.945 [2024-04-26 20:47:38.127889] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:30:19.945 [2024-04-26 20:47:38.127898] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.945 [2024-04-26 20:47:38.127902] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.945 [2024-04-26 20:47:38.127906] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:30:19.945 [2024-04-26 20:47:38.127914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.945 [2024-04-26 20:47:38.127923] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:30:19.945 [2024-04-26 20:47:38.128026] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.945 [2024-04-26 20:47:38.128032] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.945 [2024-04-26 20:47:38.128036] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.945 [2024-04-26 
20:47:38.128040] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:30:19.945 [2024-04-26 20:47:38.128049] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.945 [2024-04-26 20:47:38.128053] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.945 [2024-04-26 20:47:38.128058] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:30:19.945 [2024-04-26 20:47:38.128066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.945 [2024-04-26 20:47:38.128075] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:30:19.945 [2024-04-26 20:47:38.128172] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.945 [2024-04-26 20:47:38.128179] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.945 [2024-04-26 20:47:38.128183] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.945 [2024-04-26 20:47:38.128187] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:30:19.945 [2024-04-26 20:47:38.128196] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.945 [2024-04-26 20:47:38.128200] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.945 [2024-04-26 20:47:38.128204] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:30:19.945 [2024-04-26 20:47:38.128214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.945 [2024-04-26 20:47:38.128223] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:30:19.945 [2024-04-26 20:47:38.128325] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.945 [2024-04-26 20:47:38.128331] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.945 [2024-04-26 20:47:38.128335] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.945 [2024-04-26 20:47:38.128339] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:30:19.945 [2024-04-26 20:47:38.128348] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.945 [2024-04-26 20:47:38.128352] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.945 [2024-04-26 20:47:38.128357] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:30:19.945 [2024-04-26 20:47:38.128365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.945 [2024-04-26 20:47:38.128374] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:30:19.945 [2024-04-26 20:47:38.132401] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.945 [2024-04-26 20:47:38.132413] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.945 [2024-04-26 20:47:38.132417] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.945 [2024-04-26 20:47:38.132422] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 
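For reference, the GET LOG PAGE (02) capsules earlier in this run (cdw10 values 00ff0070, 02ff0070 and 00010070, i.e. log page 0x70 read in pieces) are what produced the two Discovery Log Entries printed above. A sketch of the same fetch through SPDK's public API follows; the buffer management and busy-poll loop are illustrative assumptions, not the driver's own code.

#include <stdbool.h>
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static bool g_log_done;

static void
log_page_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	g_log_done = true;
}

/* Fetch 'len' bytes of log page 0x70 (discovery) into 'buf', then poll the
 * admin queue until the completion arrives. Returns the record count, which
 * for the log page printed above would be 2 (genctr 2, numrec 2). */
static int
fetch_discovery_log(struct spdk_nvme_ctrlr *ctrlr, void *buf, uint32_t len)
{
	int rc;

	g_log_done = false;
	rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
					      0, buf, len, 0, log_page_cb, NULL);
	if (rc != 0) {
		return rc;
	}
	while (!g_log_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}

	struct spdk_nvmf_discovery_log_page *log = buf;

	return (int)log->numrec;
}

The driver itself reads the 16-byte header first to learn genctr and numrec, then re-reads the entries, which is why several GET LOG PAGE capsules with different dword counts appear in the trace. After the shutdown completing below, the test repeats the whole sequence against nqn.2016-06.io.spdk:cnode1.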
00:30:19.945 [2024-04-26 20:47:38.132432] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.945 [2024-04-26 20:47:38.132436] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.945 [2024-04-26 20:47:38.132440] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:30:19.945 [2024-04-26 20:47:38.132449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.945 [2024-04-26 20:47:38.132460] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:30:19.945 [2024-04-26 20:47:38.132571] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.945 [2024-04-26 20:47:38.132578] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.945 [2024-04-26 20:47:38.132581] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.945 [2024-04-26 20:47:38.132586] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:30:19.945 [2024-04-26 20:47:38.132593] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:30:19.945 00:30:19.945 20:47:38 -- host/identify.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:30:19.945 [2024-04-26 20:47:38.199219] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:30:19.945 [2024-04-26 20:47:38.199306] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3708131 ] 00:30:19.945 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.945 [2024-04-26 20:47:38.250165] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:30:19.945 [2024-04-26 20:47:38.250242] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:19.945 [2024-04-26 20:47:38.250252] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:19.945 [2024-04-26 20:47:38.250270] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:19.945 [2024-04-26 20:47:38.250281] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:19.945 [2024-04-26 20:47:38.250626] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:30:19.945 [2024-04-26 20:47:38.250656] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x613000001fc0 0 00:30:19.945 [2024-04-26 20:47:38.257395] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:19.945 [2024-04-26 20:47:38.257410] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:19.945 [2024-04-26 20:47:38.257417] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:19.945 [2024-04-26 20:47:38.257422] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:19.945 [2024-04-26 20:47:38.257459] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.945 
[2024-04-26 20:47:38.257466] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.945 [2024-04-26 20:47:38.257473] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:30:19.945 [2024-04-26 20:47:38.257493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:19.945 [2024-04-26 20:47:38.257515] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:19.945 [2024-04-26 20:47:38.265395] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.945 [2024-04-26 20:47:38.265409] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.945 [2024-04-26 20:47:38.265414] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.945 [2024-04-26 20:47:38.265421] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:30:19.945 [2024-04-26 20:47:38.265434] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:19.945 [2024-04-26 20:47:38.265445] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:30:19.945 [2024-04-26 20:47:38.265453] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:30:19.945 [2024-04-26 20:47:38.265471] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.945 [2024-04-26 20:47:38.265477] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.945 [2024-04-26 20:47:38.265485] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:30:19.945 [2024-04-26 20:47:38.265499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.945 [2024-04-26 20:47:38.265518] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:19.945 [2024-04-26 20:47:38.265712] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.945 [2024-04-26 20:47:38.265719] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.945 [2024-04-26 20:47:38.265728] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.946 [2024-04-26 20:47:38.265734] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:30:19.946 [2024-04-26 20:47:38.265745] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:30:19.946 [2024-04-26 20:47:38.265754] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:30:19.946 [2024-04-26 20:47:38.265762] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.946 [2024-04-26 20:47:38.265767] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.946 [2024-04-26 20:47:38.265773] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:30:19.946 [2024-04-26 20:47:38.265784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.946 [2024-04-26 20:47:38.265795] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:19.946 [2024-04-26 20:47:38.265973] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.946 [2024-04-26 20:47:38.265980] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.946 [2024-04-26 20:47:38.265984] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.946 [2024-04-26 20:47:38.265990] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:30:19.946 [2024-04-26 20:47:38.265996] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:30:19.946 [2024-04-26 20:47:38.266005] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:30:19.946 [2024-04-26 20:47:38.266013] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.946 [2024-04-26 20:47:38.266018] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.946 [2024-04-26 20:47:38.266023] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:30:19.946 [2024-04-26 20:47:38.266032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.946 [2024-04-26 20:47:38.266043] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:19.946 [2024-04-26 20:47:38.266207] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.946 [2024-04-26 20:47:38.266213] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.946 [2024-04-26 20:47:38.266217] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.946 [2024-04-26 20:47:38.266221] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:30:19.946 [2024-04-26 20:47:38.266228] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:19.946 [2024-04-26 20:47:38.266240] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.946 [2024-04-26 20:47:38.266245] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.946 [2024-04-26 20:47:38.266250] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:30:19.946 [2024-04-26 20:47:38.266259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.946 [2024-04-26 20:47:38.266270] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:19.946 [2024-04-26 20:47:38.266440] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.946 [2024-04-26 20:47:38.266446] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.946 [2024-04-26 20:47:38.266450] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.946 [2024-04-26 20:47:38.266454] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:30:19.946 [2024-04-26 20:47:38.266459] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:30:19.946 [2024-04-26 20:47:38.266467] 
00:30:19.946 [2024-04-26 20:47:38.266467] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms)
00:30:19.946 [2024-04-26 20:47:38.266476] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:30:19.946 [2024-04-26 20:47:38.266582] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1
00:30:19.946 [2024-04-26 20:47:38.266587] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:30:19.946 [2024-04-26 20:47:38.266596] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:19.946 [2024-04-26 20:47:38.266602] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:19.946 [2024-04-26 20:47:38.266609] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0)
00:30:19.946 [2024-04-26 20:47:38.266618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:19.946 [2024-04-26 20:47:38.266629] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:30:19.946 [2024-04-26 20:47:38.266794] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:19.946 [2024-04-26 20:47:38.266800] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:19.946 [2024-04-26 20:47:38.266804] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:19.946 [2024-04-26 20:47:38.266809] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0
00:30:19.946 [2024-04-26 20:47:38.266815] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:30:19.946 [2024-04-26 20:47:38.266825] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:19.946 [2024-04-26 20:47:38.266830] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:19.946 [2024-04-26 20:47:38.266836] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0)
00:30:19.946 [2024-04-26 20:47:38.266844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:19.946 [2024-04-26 20:47:38.266856] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:30:19.946 [2024-04-26 20:47:38.267021] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:19.946 [2024-04-26 20:47:38.267028] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:19.946 [2024-04-26 20:47:38.267032] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:19.946 [2024-04-26 20:47:38.267037] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0
00:30:19.946 [2024-04-26 20:47:38.267042] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:30:19.946 [2024-04-26 20:47:38.267049] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:30:19.946 [2024-04-26 20:47:38.267060]
nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:30:19.946 [2024-04-26 20:47:38.267067] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:30:19.946 [2024-04-26 20:47:38.267079] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.946 [2024-04-26 20:47:38.267084] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.946 [2024-04-26 20:47:38.267089] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:30:19.946 [2024-04-26 20:47:38.267098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.946 [2024-04-26 20:47:38.267110] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:19.946 [2024-04-26 20:47:38.267328] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:19.946 [2024-04-26 20:47:38.267335] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:19.946 [2024-04-26 20:47:38.267339] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:19.946 [2024-04-26 20:47:38.267344] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=4096, cccid=0 00:30:19.946 [2024-04-26 20:47:38.267352] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x613000001fc0): expected_datao=0, payload_size=4096 00:30:19.946 [2024-04-26 20:47:38.267362] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:19.946 [2024-04-26 20:47:38.267367] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:19.946 [2024-04-26 20:47:38.267474] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.946 [2024-04-26 20:47:38.267481] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.946 [2024-04-26 20:47:38.267485] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.946 [2024-04-26 20:47:38.267489] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:30:19.946 [2024-04-26 20:47:38.267501] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:30:19.946 [2024-04-26 20:47:38.267507] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:30:19.946 [2024-04-26 20:47:38.267513] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:30:19.946 [2024-04-26 20:47:38.267519] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:30:19.946 [2024-04-26 20:47:38.267526] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:30:19.946 [2024-04-26 20:47:38.267532] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:30:19.946 [2024-04-26 20:47:38.267541] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:30:19.946 [2024-04-26 20:47:38.267551] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:30:19.946 [2024-04-26 20:47:38.267557] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.946 [2024-04-26 20:47:38.267562] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:30:19.946 [2024-04-26 20:47:38.267572] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:19.946 [2024-04-26 20:47:38.267582] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:19.946 [2024-04-26 20:47:38.267702] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.946 [2024-04-26 20:47:38.267708] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.947 [2024-04-26 20:47:38.267714] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.947 [2024-04-26 20:47:38.267719] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:30:19.947 [2024-04-26 20:47:38.267727] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.947 [2024-04-26 20:47:38.267732] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.947 [2024-04-26 20:47:38.267737] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:30:19.947 [2024-04-26 20:47:38.267746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:19.947 [2024-04-26 20:47:38.267753] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.947 [2024-04-26 20:47:38.267757] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.947 [2024-04-26 20:47:38.267761] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x613000001fc0) 00:30:19.947 [2024-04-26 20:47:38.267769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:19.947 [2024-04-26 20:47:38.267775] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.947 [2024-04-26 20:47:38.267779] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.947 [2024-04-26 20:47:38.267783] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x613000001fc0) 00:30:19.947 [2024-04-26 20:47:38.267791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:19.947 [2024-04-26 20:47:38.267797] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.947 [2024-04-26 20:47:38.267800] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.947 [2024-04-26 20:47:38.267805] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:30:19.947 [2024-04-26 20:47:38.267812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:19.947 [2024-04-26 20:47:38.267817] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:19.947 [2024-04-26 20:47:38.267827] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 
30000 ms) 00:30:19.947 [2024-04-26 20:47:38.267835] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.947 [2024-04-26 20:47:38.267839] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.947 [2024-04-26 20:47:38.267844] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:30:19.947 [2024-04-26 20:47:38.267853] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.947 [2024-04-26 20:47:38.267865] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:19.947 [2024-04-26 20:47:38.267870] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 00:30:19.947 [2024-04-26 20:47:38.267875] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:30:19.947 [2024-04-26 20:47:38.267881] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:30:19.947 [2024-04-26 20:47:38.267887] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:30:19.947 [2024-04-26 20:47:38.268032] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.947 [2024-04-26 20:47:38.268039] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.947 [2024-04-26 20:47:38.268043] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.947 [2024-04-26 20:47:38.268047] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:30:19.947 [2024-04-26 20:47:38.268053] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:30:19.947 [2024-04-26 20:47:38.268060] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:30:19.947 [2024-04-26 20:47:38.268068] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:30:19.947 [2024-04-26 20:47:38.268079] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:30:19.947 [2024-04-26 20:47:38.268085] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.947 [2024-04-26 20:47:38.268091] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.947 [2024-04-26 20:47:38.268096] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:30:19.947 [2024-04-26 20:47:38.268104] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:19.947 [2024-04-26 20:47:38.268116] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:30:19.947 [2024-04-26 20:47:38.268234] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.947 [2024-04-26 20:47:38.268241] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.947 [2024-04-26 20:47:38.268245] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.947 [2024-04-26 20:47:38.268249] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:30:19.947 [2024-04-26 20:47:38.268299] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:30:19.947 [2024-04-26 20:47:38.268310] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:30:19.947 [2024-04-26 20:47:38.268321] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.947 [2024-04-26 20:47:38.268326] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.947 [2024-04-26 20:47:38.268331] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:30:19.947 [2024-04-26 20:47:38.268339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.947 [2024-04-26 20:47:38.268350] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:30:19.947 [2024-04-26 20:47:38.268488] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:19.947 [2024-04-26 20:47:38.268494] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:19.947 [2024-04-26 20:47:38.268498] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:19.947 [2024-04-26 20:47:38.268503] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=4096, cccid=4 00:30:19.947 [2024-04-26 20:47:38.268508] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=4096 00:30:19.947 [2024-04-26 20:47:38.268516] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:19.947 [2024-04-26 20:47:38.268520] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:19.947 [2024-04-26 20:47:38.268570] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.947 [2024-04-26 20:47:38.268576] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.947 [2024-04-26 20:47:38.268579] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.947 [2024-04-26 20:47:38.268584] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:30:19.947 [2024-04-26 20:47:38.268599] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:30:19.947 [2024-04-26 20:47:38.268613] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:30:19.947 [2024-04-26 20:47:38.268623] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:30:19.947 [2024-04-26 20:47:38.268632] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.947 [2024-04-26 20:47:38.268639] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.947 [2024-04-26 20:47:38.268644] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:30:19.947 [2024-04-26 20:47:38.268654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.947 [2024-04-26 
20:47:38.268664] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0
00:30:19.947 [2024-04-26 20:47:38.268800] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:30:19.947 [2024-04-26 20:47:38.268807] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:30:19.947 [2024-04-26 20:47:38.268812] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:30:19.947 [2024-04-26 20:47:38.268816] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=4096, cccid=4
00:30:19.947 [2024-04-26 20:47:38.268821] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=4096
00:30:19.947 [2024-04-26 20:47:38.268828] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:30:19.947 [2024-04-26 20:47:38.268832] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:30:19.948 [2024-04-26 20:47:38.268886] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:19.948 [2024-04-26 20:47:38.268893] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:19.948 [2024-04-26 20:47:38.268897] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:19.948 [2024-04-26 20:47:38.268901] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0
00:30:19.948 [2024-04-26 20:47:38.268916] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:30:19.948 [2024-04-26 20:47:38.268925] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:30:19.948 [2024-04-26 20:47:38.268934] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:19.948 [2024-04-26 20:47:38.268939] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:19.948 [2024-04-26 20:47:38.268945] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0)
00:30:19.948 [2024-04-26 20:47:38.268954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:19.948 [2024-04-26 20:47:38.268964] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0
00:30:19.948 [2024-04-26 20:47:38.269088] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:30:19.948 [2024-04-26 20:47:38.269095] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:30:19.948 [2024-04-26 20:47:38.269100] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:30:19.948 [2024-04-26 20:47:38.269104] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=4096, cccid=4
00:30:19.948 [2024-04-26 20:47:38.269109] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=4096
00:30:19.948 [2024-04-26 20:47:38.269116] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:30:19.948 [2024-04-26 20:47:38.269120] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:30:19.948 [2024-04-26 20:47:38.269178] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
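
By this point the init sequence has issued its IDENTIFY commands; the cdw10 values in the NOTICE lines are the CNS codes: 01h for the controller data, 02h for the active namespace list, then 00h and 03h against nsid:1 for the namespace data and its ID descriptors, each answered with a 4096-byte C2H data PDU (pdu type = 7). A compact sketch of that ordering, with admin_identify() as a hypothetical stand-in for posting the admin capsule:

    #include <stdint.h>
    #include <stdio.h>

    /* The CNS codes behind the IDENTIFY commands in the trace (cdw10). */
    enum cns {
        CNS_NS          = 0x00, /* IDENTIFY cdw10:00000000 nsid:1 - namespace data    */
        CNS_CTRLR       = 0x01, /* IDENTIFY cdw10:00000001 nsid:0 - controller data   */
        CNS_ACTIVE_NS   = 0x02, /* IDENTIFY cdw10:00000002 nsid:0 - active NS list    */
        CNS_NS_ID_DESCS = 0x03, /* IDENTIFY cdw10:00000003 nsid:1 - NS ID descriptors */
    };

    /* Hypothetical helper standing in for an admin IDENTIFY (opcode 06h);
     * a real one would post a capsule and land 4 KiB via a C2H data PDU. */
    static int admin_identify(enum cns cns, uint32_t nsid, uint8_t buf[4096])
    {
        (void)buf;
        printf("IDENTIFY (06) cdw10:%08x nsid:%u\n", (unsigned)cns, (unsigned)nsid);
        return 0;
    }

    int main(void)
    {
        static uint8_t buf[4096];
        /* Same order as the log: controller, active NS list, then per-NS
         * data and ID descriptors for the one namespace the target exposes. */
        admin_identify(CNS_CTRLR, 0, buf);
        admin_identify(CNS_ACTIVE_NS, 0, buf);
        admin_identify(CNS_NS, 1, buf);
        admin_identify(CNS_NS_ID_DESCS, 1, buf);
        return 0;
    }
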
00:30:19.948 [2024-04-26 20:47:38.269184] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:19.948 [2024-04-26 20:47:38.269188] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:19.948 [2024-04-26 20:47:38.269192] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0
00:30:19.948 [2024-04-26 20:47:38.269202] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms)
00:30:19.948 [2024-04-26 20:47:38.269210] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms)
00:30:19.948 [2024-04-26 20:47:38.269218] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms)
00:30:19.948 [2024-04-26 20:47:38.269225] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms)
00:30:19.948 [2024-04-26 20:47:38.269231] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms)
00:30:19.948 [2024-04-26 20:47:38.269237] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID
00:30:19.948 [2024-04-26 20:47:38.269244] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms)
00:30:19.948 [2024-04-26 20:47:38.269251] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout)
00:30:19.948 [2024-04-26 20:47:38.269273] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:19.948 [2024-04-26 20:47:38.269278] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:19.948 [2024-04-26 20:47:38.269283] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0)
00:30:19.948 [2024-04-26 20:47:38.269291] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:19.948 [2024-04-26 20:47:38.269299] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:19.948 [2024-04-26 20:47:38.269304] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:19.948 [2024-04-26 20:47:38.269309] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x613000001fc0)
00:30:19.948 [2024-04-26 20:47:38.269316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:30:19.948 [2024-04-26 20:47:38.269329] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0
00:30:19.948 [2024-04-26 20:47:38.269334] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0
00:30:19.948 [2024-04-26 20:47:38.273394] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:19.948 [2024-04-26 20:47:38.273406] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:19.948 [2024-04-26 20:47:38.273412] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:19.948 [2024-04-26 20:47:38.273417] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on
tqpair=0x613000001fc0 00:30:19.948 [2024-04-26 20:47:38.273426] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.948 [2024-04-26 20:47:38.273434] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.948 [2024-04-26 20:47:38.273438] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.948 [2024-04-26 20:47:38.273442] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x613000001fc0 00:30:19.948 [2024-04-26 20:47:38.273451] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.948 [2024-04-26 20:47:38.273455] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.948 [2024-04-26 20:47:38.273459] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x613000001fc0) 00:30:19.948 [2024-04-26 20:47:38.273467] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.948 [2024-04-26 20:47:38.273477] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:30:19.948 [2024-04-26 20:47:38.273594] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.948 [2024-04-26 20:47:38.273601] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.948 [2024-04-26 20:47:38.273605] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.948 [2024-04-26 20:47:38.273609] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x613000001fc0 00:30:19.948 [2024-04-26 20:47:38.273617] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.948 [2024-04-26 20:47:38.273623] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.948 [2024-04-26 20:47:38.273627] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x613000001fc0) 00:30:19.948 [2024-04-26 20:47:38.273635] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.948 [2024-04-26 20:47:38.273644] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:30:19.948 [2024-04-26 20:47:38.273760] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.948 [2024-04-26 20:47:38.273767] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.948 [2024-04-26 20:47:38.273773] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.948 [2024-04-26 20:47:38.273777] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x613000001fc0 00:30:19.948 [2024-04-26 20:47:38.273785] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.948 [2024-04-26 20:47:38.273789] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.948 [2024-04-26 20:47:38.273794] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x613000001fc0) 00:30:19.948 [2024-04-26 20:47:38.273803] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.948 [2024-04-26 20:47:38.273811] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:30:19.948 [2024-04-26 20:47:38.273908] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.948 [2024-04-26 20:47:38.273914] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.948 [2024-04-26 20:47:38.273918] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.948 [2024-04-26 20:47:38.273922] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x613000001fc0 00:30:19.948 [2024-04-26 20:47:38.273939] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.948 [2024-04-26 20:47:38.273944] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.948 [2024-04-26 20:47:38.273949] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x613000001fc0) 00:30:19.948 [2024-04-26 20:47:38.273957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.948 [2024-04-26 20:47:38.273966] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.948 [2024-04-26 20:47:38.273971] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.948 [2024-04-26 20:47:38.273976] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:30:19.948 [2024-04-26 20:47:38.273984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.948 [2024-04-26 20:47:38.273994] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.948 [2024-04-26 20:47:38.273998] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.948 [2024-04-26 20:47:38.274008] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x613000001fc0) 00:30:19.948 [2024-04-26 20:47:38.274016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.948 [2024-04-26 20:47:38.274025] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.948 [2024-04-26 20:47:38.274030] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.948 [2024-04-26 20:47:38.274035] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x613000001fc0) 00:30:19.948 [2024-04-26 20:47:38.274043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.948 [2024-04-26 20:47:38.274054] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:30:19.948 [2024-04-26 20:47:38.274060] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:30:19.948 [2024-04-26 20:47:38.274065] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b940, cid 6, qid 0 00:30:19.948 [2024-04-26 20:47:38.274070] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0 00:30:19.948 [2024-04-26 20:47:38.274239] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:19.948 [2024-04-26 20:47:38.274247] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:19.948 [2024-04-26 20:47:38.274253] 
nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:19.948 [2024-04-26 20:47:38.274258] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=8192, cccid=5 00:30:19.949 [2024-04-26 20:47:38.274264] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b7e0) on tqpair(0x613000001fc0): expected_datao=0, payload_size=8192 00:30:19.949 [2024-04-26 20:47:38.274294] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:19.949 [2024-04-26 20:47:38.274299] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:19.949 [2024-04-26 20:47:38.274306] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:19.949 [2024-04-26 20:47:38.274312] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:19.949 [2024-04-26 20:47:38.274316] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:19.949 [2024-04-26 20:47:38.274320] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=512, cccid=4 00:30:19.949 [2024-04-26 20:47:38.274325] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=512 00:30:19.949 [2024-04-26 20:47:38.274333] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:19.949 [2024-04-26 20:47:38.274337] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:19.949 [2024-04-26 20:47:38.274346] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:19.949 [2024-04-26 20:47:38.274353] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:19.949 [2024-04-26 20:47:38.274356] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:19.949 [2024-04-26 20:47:38.274361] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=512, cccid=6 00:30:19.949 [2024-04-26 20:47:38.274365] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b940) on tqpair(0x613000001fc0): expected_datao=0, payload_size=512 00:30:19.949 [2024-04-26 20:47:38.274373] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:19.949 [2024-04-26 20:47:38.274376] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:19.949 [2024-04-26 20:47:38.274390] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:19.949 [2024-04-26 20:47:38.274397] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:19.949 [2024-04-26 20:47:38.274401] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:19.949 [2024-04-26 20:47:38.274405] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=4096, cccid=7 00:30:19.949 [2024-04-26 20:47:38.274410] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001baa0) on tqpair(0x613000001fc0): expected_datao=0, payload_size=4096 00:30:19.949 [2024-04-26 20:47:38.274417] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:19.949 [2024-04-26 20:47:38.274421] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:19.949 [2024-04-26 20:47:38.274430] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.949 [2024-04-26 20:47:38.274436] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.949 [2024-04-26 20:47:38.274440] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:19.949 [2024-04-26 20:47:38.274445] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x613000001fc0
00:30:19.949 [2024-04-26 20:47:38.274463] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:19.949 [2024-04-26 20:47:38.274469] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:19.949 [2024-04-26 20:47:38.274473] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:19.949 [2024-04-26 20:47:38.274477] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0
00:30:19.949 [2024-04-26 20:47:38.274489] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:19.949 [2024-04-26 20:47:38.274496] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:19.949 [2024-04-26 20:47:38.274500] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:19.949 [2024-04-26 20:47:38.274504] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b940) on tqpair=0x613000001fc0
00:30:19.949 [2024-04-26 20:47:38.274513] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:19.949 [2024-04-26 20:47:38.274520] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:19.949 [2024-04-26 20:47:38.274523] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:19.949 [2024-04-26 20:47:38.274529] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x613000001fc0
00:30:19.949 =====================================================
00:30:19.949 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:19.949 =====================================================
00:30:19.949 Controller Capabilities/Features
00:30:19.949 ================================
00:30:19.949 Vendor ID: 8086
00:30:19.949 Subsystem Vendor ID: 8086
00:30:19.949 Serial Number: SPDK00000000000001
00:30:19.949 Model Number: SPDK bdev Controller
00:30:19.949 Firmware Version: 24.01.1
00:30:19.949 Recommended Arb Burst: 6
00:30:19.949 IEEE OUI Identifier: e4 d2 5c
00:30:19.949 Multi-path I/O
00:30:19.949 May have multiple subsystem ports: Yes
00:30:19.949 May have multiple controllers: Yes
00:30:19.949 Associated with SR-IOV VF: No
00:30:19.949 Max Data Transfer Size: 131072
00:30:19.949 Max Number of Namespaces: 32
00:30:19.949 Max Number of I/O Queues: 127
00:30:19.949 NVMe Specification Version (VS): 1.3
00:30:19.949 NVMe Specification Version (Identify): 1.3
00:30:19.949 Maximum Queue Entries: 128
00:30:19.949 Contiguous Queues Required: Yes
00:30:19.949 Arbitration Mechanisms Supported
00:30:19.949 Weighted Round Robin: Not Supported
00:30:19.949 Vendor Specific: Not Supported
00:30:19.949 Reset Timeout: 15000 ms
00:30:19.949 Doorbell Stride: 4 bytes
00:30:19.949 NVM Subsystem Reset: Not Supported
00:30:19.949 Command Sets Supported
00:30:19.949 NVM Command Set: Supported
00:30:19.949 Boot Partition: Not Supported
00:30:19.949 Memory Page Size Minimum: 4096 bytes
00:30:19.949 Memory Page Size Maximum: 4096 bytes
00:30:19.949 Persistent Memory Region: Not Supported
00:30:19.949 Optional Asynchronous Events Supported
00:30:19.949 Namespace Attribute Notices: Supported
00:30:19.949 Firmware Activation Notices: Not Supported
00:30:19.949 ANA Change Notices: Not Supported
00:30:19.949 PLE Aggregate Log Change Notices: Not Supported
00:30:19.949 LBA Status Info Alert Notices: Not Supported
00:30:19.949 EGE Aggregate Log Change Notices: Not Supported
00:30:19.949 Normal NVM Subsystem Shutdown event: Not Supported
00:30:19.949 Zone Descriptor Change Notices: Not Supported
00:30:19.949 Discovery Log Change Notices: Not Supported
00:30:19.949 Controller Attributes
00:30:19.949 128-bit Host Identifier: Supported
00:30:19.949 Non-Operational Permissive Mode: Not Supported
00:30:19.949 NVM Sets: Not Supported
00:30:19.949 Read Recovery Levels: Not Supported
00:30:19.949 Endurance Groups: Not Supported
00:30:19.949 Predictable Latency Mode: Not Supported
00:30:19.949 Traffic Based Keep Alive: Not Supported
00:30:19.949 Namespace Granularity: Not Supported
00:30:19.949 SQ Associations: Not Supported
00:30:19.949 UUID List: Not Supported
00:30:19.949 Multi-Domain Subsystem: Not Supported
00:30:19.949 Fixed Capacity Management: Not Supported
00:30:19.949 Variable Capacity Management: Not Supported
00:30:19.949 Delete Endurance Group: Not Supported
00:30:19.949 Delete NVM Set: Not Supported
00:30:19.949 Extended LBA Formats Supported: Not Supported
00:30:19.949 Flexible Data Placement Supported: Not Supported
00:30:19.949
00:30:19.949 Controller Memory Buffer Support
00:30:19.949 ================================
00:30:19.949 Supported: No
00:30:19.949
00:30:19.949 Persistent Memory Region Support
00:30:19.949 ================================
00:30:19.949 Supported: No
00:30:19.949
00:30:19.949 Admin Command Set Attributes
00:30:19.949 ============================
00:30:19.949 Security Send/Receive: Not Supported
00:30:19.949 Format NVM: Not Supported
00:30:19.949 Firmware Activate/Download: Not Supported
00:30:19.949 Namespace Management: Not Supported
00:30:19.949 Device Self-Test: Not Supported
00:30:19.949 Directives: Not Supported
00:30:19.949 NVMe-MI: Not Supported
00:30:19.949 Virtualization Management: Not Supported
00:30:19.949 Doorbell Buffer Config: Not Supported
00:30:19.949 Get LBA Status Capability: Not Supported
00:30:19.949 Command & Feature Lockdown Capability: Not Supported
00:30:19.949 Abort Command Limit: 4
00:30:19.949 Async Event Request Limit: 4
00:30:19.949 Number of Firmware Slots: N/A
00:30:19.949 Firmware Slot 1 Read-Only: N/A
00:30:19.949 Firmware Activation Without Reset: N/A
00:30:19.949 Multiple Update Detection Support: N/A
00:30:19.949 Firmware Update Granularity: No Information Provided
00:30:19.949 Per-Namespace SMART Log: No
00:30:19.949 Asymmetric Namespace Access Log Page: Not Supported
00:30:19.949 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:30:19.949 Command Effects Log Page: Supported
00:30:19.949 Get Log Page Extended Data: Supported
00:30:19.949 Telemetry Log Pages: Not Supported
00:30:19.949 Persistent Event Log Pages: Not Supported
00:30:19.949 Supported Log Pages Log Page: May Support
00:30:19.949 Commands Supported & Effects Log Page: Not Supported
00:30:19.949 Feature Identifiers & Effects Log Page: May Support
00:30:19.949 NVMe-MI Commands & Effects Log Page: May Support
00:30:19.949 Data Area 4 for Telemetry Log: Not Supported
00:30:19.949 Error Log Page Entries Supported: 128
00:30:19.949 Keep Alive: Supported
00:30:19.949 Keep Alive Granularity: 10000 ms
00:30:19.949
00:30:19.949 NVM Command Set Attributes
00:30:19.949 ==========================
00:30:19.949 Submission Queue Entry Size
00:30:19.949 Max: 64
00:30:19.949 Min: 64
00:30:19.949 Completion Queue Entry Size
00:30:19.949 Max: 16
00:30:19.949 Min: 16
00:30:19.949 Number of Namespaces: 32
00:30:19.949 Compare Command: Supported
00:30:19.949 Write Uncorrectable Command: Not Supported
00:30:19.949 Dataset Management Command: Supported
00:30:19.949 Write Zeroes Command: Supported
00:30:19.949 Set Features Save Field: Not Supported
00:30:19.949 Reservations: Supported
00:30:19.949 Timestamp: Not Supported
00:30:19.949 Copy: Supported
00:30:19.949 Volatile Write Cache: Present
00:30:19.949 Atomic Write Unit (Normal): 1
00:30:19.949 Atomic Write Unit (PFail): 1
00:30:19.949 Atomic Compare & Write Unit: 1
00:30:19.949 Fused Compare & Write: Supported
00:30:19.949 Scatter-Gather List
00:30:19.949 SGL Command Set: Supported
00:30:19.949 SGL Keyed: Supported
00:30:19.949 SGL Bit Bucket Descriptor: Not Supported
00:30:19.950 SGL Metadata Pointer: Not Supported
00:30:19.950 Oversized SGL: Not Supported
00:30:19.950 SGL Metadata Address: Not Supported
00:30:19.950 SGL Offset: Supported
00:30:19.950 Transport SGL Data Block: Not Supported
00:30:19.950 Replay Protected Memory Block: Not Supported
00:30:19.950
00:30:19.950 Firmware Slot Information
00:30:19.950 =========================
00:30:19.950 Active slot: 1
00:30:19.950 Slot 1 Firmware Revision: 24.01.1
00:30:19.950
00:30:19.950
00:30:19.950 Commands Supported and Effects
00:30:19.950 ==============================
00:30:19.950 Admin Commands
00:30:19.950 --------------
00:30:19.950 Get Log Page (02h): Supported
00:30:19.950 Identify (06h): Supported
00:30:19.950 Abort (08h): Supported
00:30:19.950 Set Features (09h): Supported
00:30:19.950 Get Features (0Ah): Supported
00:30:19.950 Asynchronous Event Request (0Ch): Supported
00:30:19.950 Keep Alive (18h): Supported
00:30:19.950 I/O Commands
00:30:19.950 ------------
00:30:19.950 Flush (00h): Supported LBA-Change
00:30:19.950 Write (01h): Supported LBA-Change
00:30:19.950 Read (02h): Supported
00:30:19.950 Compare (05h): Supported
00:30:19.950 Write Zeroes (08h): Supported LBA-Change
00:30:19.950 Dataset Management (09h): Supported LBA-Change
00:30:19.950 Copy (19h): Supported LBA-Change
00:30:19.950 Unknown (79h): Supported LBA-Change
00:30:19.950 Unknown (7Ah): Supported
00:30:19.950
00:30:19.950 Error Log
00:30:19.950 =========
00:30:19.950
00:30:19.950 Arbitration
00:30:19.950 ===========
00:30:19.950 Arbitration Burst: 1
00:30:19.950
00:30:19.950 Power Management
00:30:19.950 ================
00:30:19.950 Number of Power States: 1
00:30:19.950 Current Power State: Power State #0
00:30:19.950 Power State #0:
00:30:19.950 Max Power: 0.00 W
00:30:19.950 Non-Operational State: Operational
00:30:19.950 Entry Latency: Not Reported
00:30:19.950 Exit Latency: Not Reported
00:30:19.950 Relative Read Throughput: 0
00:30:19.950 Relative Read Latency: 0
00:30:19.950 Relative Write Throughput: 0
00:30:19.950 Relative Write Latency: 0
00:30:19.950 Idle Power: Not Reported
00:30:19.950 Active Power: Not Reported
00:30:19.950 Non-Operational Permissive Mode: Not Supported
00:30:19.950
00:30:19.950 Health Information
00:30:19.950 ==================
00:30:19.950 Critical Warnings:
00:30:19.950 Available Spare Space: OK
00:30:19.950 Temperature: OK
00:30:19.950 Device Reliability: OK
00:30:19.950 Read Only: No
00:30:19.950 Volatile Memory Backup: OK
00:30:19.950 Current Temperature: 0 Kelvin (-273 Celsius)
00:30:19.950 Temperature Threshold: [2024-04-26 20:47:38.274657] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:19.950 [2024-04-26 20:47:38.274662] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:19.950 [2024-04-26 20:47:38.274667] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*:
capsule_cmd cid=7 on tqpair(0x613000001fc0) 00:30:19.950 [2024-04-26 20:47:38.274676] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.950 [2024-04-26 20:47:38.274686] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0 00:30:19.950 [2024-04-26 20:47:38.274806] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.950 [2024-04-26 20:47:38.274813] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.950 [2024-04-26 20:47:38.274817] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.950 [2024-04-26 20:47:38.274822] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x613000001fc0 00:30:19.950 [2024-04-26 20:47:38.274860] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:30:19.950 [2024-04-26 20:47:38.274871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.950 [2024-04-26 20:47:38.274880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.950 [2024-04-26 20:47:38.274886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.950 [2024-04-26 20:47:38.274893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.950 [2024-04-26 20:47:38.274901] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.950 [2024-04-26 20:47:38.274906] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.950 [2024-04-26 20:47:38.274911] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:30:19.950 [2024-04-26 20:47:38.274922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.950 [2024-04-26 20:47:38.274934] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:30:19.950 [2024-04-26 20:47:38.275049] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.950 [2024-04-26 20:47:38.275056] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.950 [2024-04-26 20:47:38.275060] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.950 [2024-04-26 20:47:38.275065] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:30:19.950 [2024-04-26 20:47:38.275074] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.950 [2024-04-26 20:47:38.275080] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.950 [2024-04-26 20:47:38.275086] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:30:19.950 [2024-04-26 20:47:38.275095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.950 [2024-04-26 20:47:38.275109] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:30:19.950 [2024-04-26 20:47:38.275250] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.950 [2024-04-26 20:47:38.275257] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.950 [2024-04-26 20:47:38.275261] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.950 [2024-04-26 20:47:38.275266] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:30:19.950 [2024-04-26 20:47:38.275272] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:30:19.950 [2024-04-26 20:47:38.275278] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:30:19.950 [2024-04-26 20:47:38.275288] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.950 [2024-04-26 20:47:38.275292] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.950 [2024-04-26 20:47:38.275297] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:30:19.950 [2024-04-26 20:47:38.275306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.950 [2024-04-26 20:47:38.275316] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:30:19.950 [2024-04-26 20:47:38.275432] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.950 [2024-04-26 20:47:38.275439] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.950 [2024-04-26 20:47:38.275443] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.950 [2024-04-26 20:47:38.275447] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:30:19.950 [2024-04-26 20:47:38.275457] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.950 [2024-04-26 20:47:38.275461] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.950 [2024-04-26 20:47:38.275465] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:30:19.950 [2024-04-26 20:47:38.275476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.950 [2024-04-26 20:47:38.275486] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:30:19.950 [2024-04-26 20:47:38.275598] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.950 [2024-04-26 20:47:38.275604] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.950 [2024-04-26 20:47:38.275608] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.950 [2024-04-26 20:47:38.275612] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:30:19.950 [2024-04-26 20:47:38.275623] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.950 [2024-04-26 20:47:38.275627] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.950 [2024-04-26 20:47:38.275631] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:30:19.950 [2024-04-26 20:47:38.275639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
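
From here the log shows teardown: "Prepare to destruct SSD", the outstanding ASYNC EVENT REQUESTs completing as ABORTED - SQ DELETION, and then the shutdown handshake; since the target reports RTD3E = 0, the driver falls back to a 10000 ms shutdown timeout and polls CSTS via FABRIC PROPERTY GET until the controller reports completion ("shutdown complete in 5 milliseconds" further down). A sketch of that sequence, assuming the spec-defined CC.SHN/CSTS.SHST bits and the same kind of hypothetical property helpers as in the earlier sketch:

    #include <stdint.h>
    #include <stdio.h>

    #define NVME_REG_CC        0x14u
    #define NVME_REG_CSTS      0x1cu
    #define NVME_CC_SHN_NORMAL (1u << 14)   /* CC.SHN = 01b: normal shutdown */
    #define NVME_CSTS_SHST_MSK (3u << 2)    /* CSTS.SHST field               */
    #define NVME_CSTS_SHST_CPL (2u << 2)    /* 10b: shutdown complete        */

    /* Hypothetical Property Get/Set stand-ins; modeled so the shutdown
     * completes immediately, much as the real target did here (5 ms). */
    static uint32_t reg_cc, reg_csts;

    static uint32_t prop_get(uint32_t ofs)
    {
        reg_csts = (reg_cc & NVME_CC_SHN_NORMAL) ? NVME_CSTS_SHST_CPL : 0;
        return ofs == NVME_REG_CC ? reg_cc : reg_csts;
    }

    static void prop_set(uint32_t ofs, uint32_t val)
    {
        if (ofs == NVME_REG_CC)
            reg_cc = val;
    }

    int main(void)
    {
        int timeout_ms = 10000;     /* RTD3E = 0, so fall back to 10000 ms */
        prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) | NVME_CC_SHN_NORMAL);
        for (int waited = 0; waited < timeout_ms; waited++) {
            if ((prop_get(NVME_REG_CSTS) & NVME_CSTS_SHST_MSK) == NVME_CSTS_SHST_CPL) {
                printf("shutdown complete in %d milliseconds\n", waited);
                return 0;
            }
            /* real code sleeps ~1 ms between Property Get polls */
        }
        return 1;                   /* shutdown timed out */
    }
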
00:30:19.950 [2024-04-26 20:47:38.275649] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:30:19.950 [2024-04-26 20:47:38.275770] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.950 [2024-04-26 20:47:38.275778] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.950 [2024-04-26 20:47:38.275782] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.950 [2024-04-26 20:47:38.275786] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:30:19.950 [2024-04-26 20:47:38.275796] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.950 [2024-04-26 20:47:38.275800] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.950 [2024-04-26 20:47:38.275804] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:30:19.950 [2024-04-26 20:47:38.275813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.950 [2024-04-26 20:47:38.275822] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:30:19.950 [2024-04-26 20:47:38.275943] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.950 [2024-04-26 20:47:38.275949] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.950 [2024-04-26 20:47:38.275953] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.950 [2024-04-26 20:47:38.275957] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:30:19.950 [2024-04-26 20:47:38.275966] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.950 [2024-04-26 20:47:38.275972] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.950 [2024-04-26 20:47:38.275976] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:30:19.950 [2024-04-26 20:47:38.275987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.950 [2024-04-26 20:47:38.275997] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:30:19.950 [2024-04-26 20:47:38.276116] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.950 [2024-04-26 20:47:38.276123] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.951 [2024-04-26 20:47:38.276127] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.951 [2024-04-26 20:47:38.276131] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:30:19.951 [2024-04-26 20:47:38.276141] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.951 [2024-04-26 20:47:38.276145] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.951 [2024-04-26 20:47:38.276149] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:30:19.951 [2024-04-26 20:47:38.276158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.951 [2024-04-26 20:47:38.276167] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, 
qid 0 00:30:19.951 [2024-04-26 20:47:38.276289] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.951 [2024-04-26 20:47:38.276296] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.951 [2024-04-26 20:47:38.276300] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.951 [2024-04-26 20:47:38.276305] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:30:19.951 [2024-04-26 20:47:38.276315] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.951 [2024-04-26 20:47:38.276319] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.951 [2024-04-26 20:47:38.276323] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:30:19.951 [2024-04-26 20:47:38.276330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.951 [2024-04-26 20:47:38.276340] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0
00:30:19.951 [2024-04-26 20:47:38.277313] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.951 [2024-04-26 20:47:38.277320] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.951 [2024-04-26 20:47:38.277323]
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.951 [2024-04-26 20:47:38.277328] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:30:19.951 [2024-04-26 20:47:38.277337] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.951 [2024-04-26 20:47:38.277341] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.951 [2024-04-26 20:47:38.277345] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:30:19.951 [2024-04-26 20:47:38.277353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.951 [2024-04-26 20:47:38.277363] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:30:20.211 [2024-04-26 20:47:38.280365] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:20.211 [2024-04-26 20:47:38.280376] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:20.211 [2024-04-26 20:47:38.280385] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:20.211 [2024-04-26 20:47:38.280390] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:30:20.211 [2024-04-26 20:47:38.280401] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:20.211 [2024-04-26 20:47:38.280405] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:20.211 [2024-04-26 20:47:38.280409] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:30:20.211 [2024-04-26 20:47:38.280417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.211 [2024-04-26 20:47:38.280431] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:30:20.211 [2024-04-26 20:47:38.280542] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:20.211 [2024-04-26 20:47:38.280548] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:20.211 [2024-04-26 20:47:38.280552] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:20.211 [2024-04-26 20:47:38.280556] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:30:20.211 [2024-04-26 20:47:38.280564] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:30:20.211 0 Kelvin (-273 Celsius) 00:30:20.211 Available Spare: 0% 00:30:20.211 Available Spare Threshold: 0% 00:30:20.211 Life Percentage Used: 0% 00:30:20.211 Data Units Read: 0 00:30:20.211 Data Units Written: 0 00:30:20.211 Host Read Commands: 0 00:30:20.211 Host Write Commands: 0 00:30:20.211 Controller Busy Time: 0 minutes 00:30:20.211 Power Cycles: 0 00:30:20.211 Power On Hours: 0 hours 00:30:20.211 Unsafe Shutdowns: 0 00:30:20.211 Unrecoverable Media Errors: 0 00:30:20.211 Lifetime Error Log Entries: 0 00:30:20.211 Warning Temperature Time: 0 minutes 00:30:20.211 Critical Temperature Time: 0 minutes 00:30:20.211 00:30:20.211 Number of Queues 00:30:20.211 ================ 00:30:20.211 Number of I/O Submission Queues: 127 00:30:20.211 Number of I/O Completion Queues: 127 00:30:20.211 00:30:20.211 Active Namespaces 00:30:20.211 ================= 00:30:20.211 Namespace 
ID:1 00:30:20.211 Error Recovery Timeout: Unlimited 00:30:20.211 Command Set Identifier: NVM (00h) 00:30:20.211 Deallocate: Supported 00:30:20.211 Deallocated/Unwritten Error: Not Supported 00:30:20.211 Deallocated Read Value: Unknown 00:30:20.211 Deallocate in Write Zeroes: Not Supported 00:30:20.211 Deallocated Guard Field: 0xFFFF 00:30:20.211 Flush: Supported 00:30:20.211 Reservation: Supported 00:30:20.211 Namespace Sharing Capabilities: Multiple Controllers 00:30:20.211 Size (in LBAs): 131072 (0GiB) 00:30:20.211 Capacity (in LBAs): 131072 (0GiB) 00:30:20.211 Utilization (in LBAs): 131072 (0GiB) 00:30:20.211 NGUID: ABCDEF0123456789ABCDEF0123456789 00:30:20.211 EUI64: ABCDEF0123456789 00:30:20.211 UUID: 8c164a9c-2a08-4233-940b-afeaea4e4701 00:30:20.211 Thin Provisioning: Not Supported 00:30:20.211 Per-NS Atomic Units: Yes 00:30:20.211 Atomic Boundary Size (Normal): 0 00:30:20.211 Atomic Boundary Size (PFail): 0 00:30:20.211 Atomic Boundary Offset: 0 00:30:20.211 Maximum Single Source Range Length: 65535 00:30:20.211 Maximum Copy Length: 65535 00:30:20.211 Maximum Source Range Count: 1 00:30:20.211 NGUID/EUI64 Never Reused: No 00:30:20.211 Namespace Write Protected: No 00:30:20.211 Number of LBA Formats: 1 00:30:20.211 Current LBA Format: LBA Format #00 00:30:20.211 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:20.211 00:30:20.211 20:47:38 -- host/identify.sh@51 -- # sync 00:30:20.211 20:47:38 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:20.211 20:47:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:20.211 20:47:38 -- common/autotest_common.sh@10 -- # set +x 00:30:20.211 20:47:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:20.211 20:47:38 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:20.211 20:47:38 -- host/identify.sh@56 -- # nvmftestfini 00:30:20.211 20:47:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:20.211 20:47:38 -- nvmf/common.sh@116 -- # sync 00:30:20.211 20:47:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:20.211 20:47:38 -- nvmf/common.sh@119 -- # set +e 00:30:20.211 20:47:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:20.211 20:47:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:20.211 rmmod nvme_tcp 00:30:20.211 rmmod nvme_fabrics 00:30:20.211 rmmod nvme_keyring 00:30:20.211 20:47:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:20.211 20:47:38 -- nvmf/common.sh@123 -- # set -e 00:30:20.211 20:47:38 -- nvmf/common.sh@124 -- # return 0 00:30:20.211 20:47:38 -- nvmf/common.sh@477 -- # '[' -n 3707925 ']' 00:30:20.211 20:47:38 -- nvmf/common.sh@478 -- # killprocess 3707925 00:30:20.211 20:47:38 -- common/autotest_common.sh@926 -- # '[' -z 3707925 ']' 00:30:20.211 20:47:38 -- common/autotest_common.sh@930 -- # kill -0 3707925 00:30:20.211 20:47:38 -- common/autotest_common.sh@931 -- # uname 00:30:20.211 20:47:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:20.211 20:47:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3707925 00:30:20.211 20:47:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:20.211 20:47:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:20.211 20:47:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3707925' 00:30:20.211 killing process with pid 3707925 00:30:20.211 20:47:38 -- common/autotest_common.sh@945 -- # kill 3707925 00:30:20.211 [2024-04-26 20:47:38.416208] app.c: 883:log_deprecation_hits: *WARNING*: 
rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:20.211 20:47:38 -- common/autotest_common.sh@950 -- # wait 3707925 00:30:20.781 20:47:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:20.781 20:47:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:20.781 20:47:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:20.781 20:47:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:20.781 20:47:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:20.781 20:47:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.781 20:47:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:20.782 20:47:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:22.683 20:47:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:22.683 00:30:22.683 real 0m9.519s 00:30:22.683 user 0m7.654s 00:30:22.683 sys 0m4.474s 00:30:22.683 20:47:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:22.683 20:47:40 -- common/autotest_common.sh@10 -- # set +x 00:30:22.683 ************************************ 00:30:22.683 END TEST nvmf_identify 00:30:22.683 ************************************ 00:30:22.942 20:47:41 -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:22.942 20:47:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:22.942 20:47:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:22.942 20:47:41 -- common/autotest_common.sh@10 -- # set +x 00:30:22.942 ************************************ 00:30:22.942 START TEST nvmf_perf 00:30:22.942 ************************************ 00:30:22.942 20:47:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:22.942 * Looking for test storage... 
00:30:22.942 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:30:22.942 20:47:41 -- host/perf.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:30:22.942 20:47:41 -- nvmf/common.sh@7 -- # uname -s 00:30:22.942 20:47:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:22.942 20:47:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:22.942 20:47:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:22.942 20:47:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:22.942 20:47:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:22.942 20:47:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:22.942 20:47:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:22.942 20:47:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:22.942 20:47:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:22.942 20:47:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:22.942 20:47:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:30:22.942 20:47:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:30:22.942 20:47:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:22.942 20:47:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:22.942 20:47:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:30:22.942 20:47:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:30:22.942 20:47:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:22.942 20:47:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:22.942 20:47:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:22.942 20:47:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.943 20:47:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.943 20:47:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.943 20:47:41 -- paths/export.sh@5 -- # export PATH 00:30:22.943 20:47:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.943 20:47:41 -- nvmf/common.sh@46 -- # : 0 00:30:22.943 20:47:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:22.943 20:47:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:22.943 20:47:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:22.943 20:47:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:22.943 20:47:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:22.943 20:47:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:22.943 20:47:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:22.943 20:47:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:22.943 20:47:41 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:22.943 20:47:41 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:22.943 20:47:41 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:30:22.943 20:47:41 -- host/perf.sh@17 -- # nvmftestinit 00:30:22.943 20:47:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:22.943 20:47:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:22.943 20:47:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:22.943 20:47:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:22.943 20:47:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:22.943 20:47:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:22.943 20:47:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:22.943 20:47:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:22.943 20:47:41 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:30:22.943 20:47:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:22.943 20:47:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:22.943 20:47:41 -- common/autotest_common.sh@10 -- # set +x 00:30:28.234 20:47:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:28.234 20:47:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:28.234 20:47:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:28.234 20:47:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:28.234 20:47:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:28.234 20:47:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:28.234 20:47:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:28.234 20:47:46 -- nvmf/common.sh@294 -- # net_devs=() 
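# Note: the pci_devs/e810/x722/mlx arrays declared in this stretch of
# nvmf/common.sh get filled by matching each NIC's PCI vendor:device ID,
# as the trace below shows for the two 0x8086:0x159b (E810/ice) ports.
# A condensed sketch of that classification logic -- not the script's
# actual mechanism -- assuming the lspci machine-readable field layout
# (addr class vendor device):
#
#   intel=8086; mellanox=15b3
#   e810=(); x722=(); mlx=()
#   while read -r addr class vendor device _; do
#     case "$vendor:$device" in
#       "$intel:1592"|"$intel:159b") e810+=("$addr") ;;   # E810 family (ice)
#       "$intel:37d2")               x722+=("$addr") ;;   # X722 (i40e)
#       "$mellanox:"*)               mlx+=("$addr")  ;;   # ConnectX family
#     esac
#   done < <(lspci -Dnmm | tr -d '"')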
00:30:28.234 20:47:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:28.234 20:47:46 -- nvmf/common.sh@295 -- # e810=() 00:30:28.234 20:47:46 -- nvmf/common.sh@295 -- # local -ga e810 00:30:28.234 20:47:46 -- nvmf/common.sh@296 -- # x722=() 00:30:28.234 20:47:46 -- nvmf/common.sh@296 -- # local -ga x722 00:30:28.234 20:47:46 -- nvmf/common.sh@297 -- # mlx=() 00:30:28.234 20:47:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:28.234 20:47:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:28.234 20:47:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:28.234 20:47:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:28.234 20:47:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:28.234 20:47:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:28.234 20:47:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:28.234 20:47:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:28.234 20:47:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:28.234 20:47:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:28.234 20:47:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:28.234 20:47:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:28.234 20:47:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:28.234 20:47:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:28.234 20:47:46 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:30:28.234 20:47:46 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:30:28.234 20:47:46 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:30:28.234 20:47:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:28.234 20:47:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:28.234 20:47:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:30:28.234 Found 0000:27:00.0 (0x8086 - 0x159b) 00:30:28.234 20:47:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:28.234 20:47:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:28.234 20:47:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.234 20:47:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.234 20:47:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:28.234 20:47:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:28.234 20:47:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:30:28.234 Found 0000:27:00.1 (0x8086 - 0x159b) 00:30:28.234 20:47:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:28.234 20:47:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:28.234 20:47:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.234 20:47:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.234 20:47:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:28.234 20:47:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:28.234 20:47:46 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:30:28.234 20:47:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:28.234 20:47:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.234 20:47:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:28.234 20:47:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.234 20:47:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:27:00.0: cvl_0_0' 00:30:28.234 Found net devices under 0000:27:00.0: cvl_0_0 00:30:28.234 20:47:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.234 20:47:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:28.234 20:47:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.234 20:47:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:28.234 20:47:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.234 20:47:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:30:28.234 Found net devices under 0000:27:00.1: cvl_0_1 00:30:28.234 20:47:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.234 20:47:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:28.234 20:47:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:28.234 20:47:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:28.234 20:47:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:28.234 20:47:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:28.234 20:47:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:28.234 20:47:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:28.234 20:47:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:28.234 20:47:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:28.234 20:47:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:28.234 20:47:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:28.234 20:47:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:28.234 20:47:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:28.234 20:47:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:28.234 20:47:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:28.234 20:47:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:28.234 20:47:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:28.234 20:47:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:28.234 20:47:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:28.234 20:47:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:28.234 20:47:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:28.234 20:47:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:28.234 20:47:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:28.234 20:47:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:28.234 20:47:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:28.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:28.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:30:28.234 00:30:28.234 --- 10.0.0.2 ping statistics --- 00:30:28.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.234 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:30:28.234 20:47:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:28.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:28.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.425 ms 00:30:28.234 00:30:28.234 --- 10.0.0.1 ping statistics --- 00:30:28.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.234 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:30:28.234 20:47:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:28.234 20:47:46 -- nvmf/common.sh@410 -- # return 0 00:30:28.234 20:47:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:28.234 20:47:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:28.234 20:47:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:28.234 20:47:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:28.234 20:47:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:28.234 20:47:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:28.234 20:47:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:28.234 20:47:46 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:28.234 20:47:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:28.234 20:47:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:28.235 20:47:46 -- common/autotest_common.sh@10 -- # set +x 00:30:28.235 20:47:46 -- nvmf/common.sh@469 -- # nvmfpid=3712153 00:30:28.235 20:47:46 -- nvmf/common.sh@470 -- # waitforlisten 3712153 00:30:28.235 20:47:46 -- common/autotest_common.sh@819 -- # '[' -z 3712153 ']' 00:30:28.235 20:47:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:28.235 20:47:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:28.235 20:47:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:28.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:28.235 20:47:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:28.235 20:47:46 -- common/autotest_common.sh@10 -- # set +x 00:30:28.235 20:47:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:28.235 [2024-04-26 20:47:46.344418] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:30:28.235 [2024-04-26 20:47:46.344522] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:28.235 EAL: No free 2048 kB hugepages reported on node 1 00:30:28.235 [2024-04-26 20:47:46.464489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:28.235 [2024-04-26 20:47:46.562439] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:28.235 [2024-04-26 20:47:46.562621] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:28.235 [2024-04-26 20:47:46.562635] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:28.235 [2024-04-26 20:47:46.562645] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
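# Note: the nvmf_tcp_init trace above wires both ends of the TCP connection
# onto one host: the target port cvl_0_0 (10.0.0.2) is moved into the
# cvl_0_0_ns_spdk network namespace while the initiator port cvl_0_1
# (10.0.0.1) stays in the root namespace, and the pings verify reachability
# in both directions. The same topology can be reproduced standalone with
# the commands from the trace (interface names as captured here):
#
#   ip -4 addr flush cvl_0_0
#   ip -4 addr flush cvl_0_1
#   ip netns add cvl_0_0_ns_spdk
#   ip link set cvl_0_0 netns cvl_0_0_ns_spdk
#   ip addr add 10.0.0.1/24 dev cvl_0_1
#   ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
#   ip link set cvl_0_1 up
#   ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
#   ip netns exec cvl_0_0_ns_spdk ip link set lo up
#   iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT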
00:30:28.235 [2024-04-26 20:47:46.562794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:28.235 [2024-04-26 20:47:46.562901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:28.235 [2024-04-26 20:47:46.563001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.235 [2024-04-26 20:47:46.563012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:28.803 20:47:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:28.803 20:47:47 -- common/autotest_common.sh@852 -- # return 0 00:30:28.803 20:47:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:28.803 20:47:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:28.803 20:47:47 -- common/autotest_common.sh@10 -- # set +x 00:30:28.803 20:47:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:28.803 20:47:47 -- host/perf.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:28.803 20:47:47 -- host/perf.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:35.375 20:47:52 -- host/perf.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:35.375 20:47:52 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:35.375 20:47:53 -- host/perf.sh@30 -- # local_nvme_trid=0000:c9:00.0 00:30:35.375 20:47:53 -- host/perf.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:35.375 20:47:53 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:30:35.375 20:47:53 -- host/perf.sh@33 -- # '[' -n 0000:c9:00.0 ']' 00:30:35.375 20:47:53 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:35.375 20:47:53 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:35.375 20:47:53 -- host/perf.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:35.375 [2024-04-26 20:47:53.322107] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:35.375 20:47:53 -- host/perf.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:35.375 20:47:53 -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:35.375 20:47:53 -- host/perf.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:35.375 20:47:53 -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:35.375 20:47:53 -- host/perf.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:35.636 20:47:53 -- host/perf.sh@48 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:35.636 [2024-04-26 20:47:53.912938] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:35.636 20:47:53 -- host/perf.sh@49 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:35.897 20:47:54 -- host/perf.sh@52 -- # '[' -n 0000:c9:00.0 ']' 00:30:35.897 20:47:54 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:c9:00.0' 00:30:35.897 20:47:54 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:35.897 20:47:54 -- host/perf.sh@24 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:c9:00.0' 00:30:37.272 Initializing NVMe Controllers 00:30:37.272 Attached to NVMe Controller at 0000:c9:00.0 [8086:0a54] 00:30:37.272 Associating PCIE (0000:c9:00.0) NSID 1 with lcore 0 00:30:37.272 Initialization complete. Launching workers. 00:30:37.272 ======================================================== 00:30:37.272 Latency(us) 00:30:37.272 Device Information : IOPS MiB/s Average min max 00:30:37.272 PCIE (0000:c9:00.0) NSID 1 from core 0: 95546.88 373.23 334.50 14.99 5242.53 00:30:37.272 ======================================================== 00:30:37.272 Total : 95546.88 373.23 334.50 14.99 5242.53 00:30:37.272 00:30:37.272 20:47:55 -- host/perf.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:37.272 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.179 Initializing NVMe Controllers 00:30:39.179 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:39.179 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:39.179 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:39.179 Initialization complete. Launching workers. 00:30:39.179 ======================================================== 00:30:39.179 Latency(us) 00:30:39.179 Device Information : IOPS MiB/s Average min max 00:30:39.179 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 85.00 0.33 11978.78 138.20 45745.92 00:30:39.179 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 57.00 0.22 17823.07 7952.17 48022.80 00:30:39.179 ======================================================== 00:30:39.179 Total : 142.00 0.55 14324.73 138.20 48022.80 00:30:39.179 00:30:39.179 20:47:57 -- host/perf.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:39.179 EAL: No free 2048 kB hugepages reported on node 1 00:30:40.117 Initializing NVMe Controllers 00:30:40.117 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:40.117 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:40.117 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:40.117 Initialization complete. Launching workers. 
00:30:40.117 ======================================================== 00:30:40.117 Latency(us) 00:30:40.117 Device Information : IOPS MiB/s Average min max 00:30:40.117 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11314.74 44.20 2829.48 346.30 9060.65 00:30:40.117 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3913.91 15.29 8230.87 3352.52 16378.63 00:30:40.117 ======================================================== 00:30:40.117 Total : 15228.65 59.49 4217.69 346.30 16378.63 00:30:40.117 00:30:40.117 20:47:58 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:30:40.118 20:47:58 -- host/perf.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:40.472 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.022 Initializing NVMe Controllers 00:30:43.022 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:43.022 Controller IO queue size 128, less than required. 00:30:43.022 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:43.022 Controller IO queue size 128, less than required. 00:30:43.022 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:43.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:43.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:43.022 Initialization complete. Launching workers. 00:30:43.022 ======================================================== 00:30:43.022 Latency(us) 00:30:43.022 Device Information : IOPS MiB/s Average min max 00:30:43.022 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1231.46 307.86 106938.36 55680.88 174236.44 00:30:43.022 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 579.74 144.94 234307.61 86088.43 351964.43 00:30:43.022 ======================================================== 00:30:43.022 Total : 1811.20 452.80 147707.77 55680.88 351964.43 00:30:43.022 00:30:43.022 20:48:01 -- host/perf.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:43.022 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.022 No valid NVMe controllers or AIO or URING devices found 00:30:43.280 Initializing NVMe Controllers 00:30:43.280 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:43.280 Controller IO queue size 128, less than required. 00:30:43.280 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:43.280 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:43.280 Controller IO queue size 128, less than required. 00:30:43.280 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:43.280 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:30:43.280 WARNING: Some requested NVMe devices were skipped 00:30:43.280 20:48:01 -- host/perf.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:43.280 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.566 Initializing NVMe Controllers 00:30:46.566 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:46.566 Controller IO queue size 128, less than required. 00:30:46.566 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:46.566 Controller IO queue size 128, less than required. 00:30:46.566 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:46.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:46.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:46.566 Initialization complete. Launching workers. 00:30:46.566 00:30:46.566 ==================== 00:30:46.566 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:46.566 TCP transport: 00:30:46.566 polls: 32786 00:30:46.566 idle_polls: 9564 00:30:46.566 sock_completions: 23222 00:30:46.566 nvme_completions: 4598 00:30:46.566 submitted_requests: 7036 00:30:46.566 queued_requests: 1 00:30:46.566 00:30:46.566 ==================== 00:30:46.566 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:46.566 TCP transport: 00:30:46.566 polls: 36711 00:30:46.566 idle_polls: 13085 00:30:46.566 sock_completions: 23626 00:30:46.566 nvme_completions: 7272 00:30:46.566 submitted_requests: 11004 00:30:46.566 queued_requests: 1 00:30:46.566 ======================================================== 00:30:46.566 Latency(us) 00:30:46.566 Device Information : IOPS MiB/s Average min max 00:30:46.566 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1212.99 303.25 109908.62 65865.50 200880.45 00:30:46.566 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1881.49 470.37 68156.02 41977.67 127428.63 00:30:46.566 ======================================================== 00:30:46.566 Total : 3094.48 773.62 84522.45 41977.67 200880.45 00:30:46.566 00:30:46.566 20:48:04 -- host/perf.sh@66 -- # sync 00:30:46.566 20:48:04 -- host/perf.sh@67 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:46.566 20:48:04 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:46.566 20:48:04 -- host/perf.sh@71 -- # '[' -n 0000:c9:00.0 ']' 00:30:46.566 20:48:04 -- host/perf.sh@72 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:53.135 20:48:10 -- host/perf.sh@72 -- # ls_guid=06404923-2077-4558-9c3f-95ce9fe749ee 00:30:53.135 20:48:10 -- host/perf.sh@73 -- # get_lvs_free_mb 06404923-2077-4558-9c3f-95ce9fe749ee 00:30:53.135 20:48:10 -- common/autotest_common.sh@1343 -- # local lvs_uuid=06404923-2077-4558-9c3f-95ce9fe749ee 00:30:53.135 20:48:10 -- common/autotest_common.sh@1344 -- # local lvs_info 00:30:53.135 20:48:10 -- common/autotest_common.sh@1345 -- # local fc 00:30:53.135 20:48:10 -- common/autotest_common.sh@1346 -- # local cs 00:30:53.135 20:48:10 -- common/autotest_common.sh@1347 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:53.135 20:48:10 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:30:53.135 { 00:30:53.135 "uuid": "06404923-2077-4558-9c3f-95ce9fe749ee", 00:30:53.135 "name": "lvs_0", 00:30:53.135 "base_bdev": "Nvme0n1", 00:30:53.135 "total_data_clusters": 476466, 00:30:53.135 "free_clusters": 476466, 00:30:53.135 "block_size": 512, 00:30:53.135 "cluster_size": 4194304 00:30:53.135 } 00:30:53.135 ]' 00:30:53.135 20:48:10 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="06404923-2077-4558-9c3f-95ce9fe749ee") .free_clusters' 00:30:53.135 20:48:10 -- common/autotest_common.sh@1348 -- # fc=476466 00:30:53.135 20:48:10 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="06404923-2077-4558-9c3f-95ce9fe749ee") .cluster_size' 00:30:53.135 20:48:10 -- common/autotest_common.sh@1349 -- # cs=4194304 00:30:53.135 20:48:10 -- common/autotest_common.sh@1352 -- # free_mb=1905864 00:30:53.135 20:48:10 -- common/autotest_common.sh@1353 -- # echo 1905864 00:30:53.135 1905864 00:30:53.135 20:48:10 -- host/perf.sh@77 -- # '[' 1905864 -gt 20480 ']' 00:30:53.135 20:48:10 -- host/perf.sh@78 -- # free_mb=20480 00:30:53.135 20:48:10 -- host/perf.sh@80 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 06404923-2077-4558-9c3f-95ce9fe749ee lbd_0 20480 00:30:53.135 20:48:11 -- host/perf.sh@80 -- # lb_guid=62a20815-eeb4-4720-b15d-61ae5814adf3 00:30:53.135 20:48:11 -- host/perf.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 62a20815-eeb4-4720-b15d-61ae5814adf3 lvs_n_0 00:30:55.041 20:48:13 -- host/perf.sh@83 -- # ls_nested_guid=7ce09cad-de46-450a-9684-7dbb7408e29b 00:30:55.041 20:48:13 -- host/perf.sh@84 -- # get_lvs_free_mb 7ce09cad-de46-450a-9684-7dbb7408e29b 00:30:55.041 20:48:13 -- common/autotest_common.sh@1343 -- # local lvs_uuid=7ce09cad-de46-450a-9684-7dbb7408e29b 00:30:55.041 20:48:13 -- common/autotest_common.sh@1344 -- # local lvs_info 00:30:55.041 20:48:13 -- common/autotest_common.sh@1345 -- # local fc 00:30:55.041 20:48:13 -- common/autotest_common.sh@1346 -- # local cs 00:30:55.041 20:48:13 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:55.041 20:48:13 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:30:55.041 { 00:30:55.041 "uuid": "06404923-2077-4558-9c3f-95ce9fe749ee", 00:30:55.041 "name": "lvs_0", 00:30:55.041 "base_bdev": "Nvme0n1", 00:30:55.041 "total_data_clusters": 476466, 00:30:55.041 "free_clusters": 471346, 00:30:55.041 "block_size": 512, 00:30:55.041 "cluster_size": 4194304 00:30:55.041 }, 00:30:55.041 { 00:30:55.041 "uuid": "7ce09cad-de46-450a-9684-7dbb7408e29b", 00:30:55.041 "name": "lvs_n_0", 00:30:55.041 "base_bdev": "62a20815-eeb4-4720-b15d-61ae5814adf3", 00:30:55.041 "total_data_clusters": 5114, 00:30:55.041 "free_clusters": 5114, 00:30:55.041 "block_size": 512, 00:30:55.041 "cluster_size": 4194304 00:30:55.041 } 00:30:55.041 ]' 00:30:55.041 20:48:13 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="7ce09cad-de46-450a-9684-7dbb7408e29b") .free_clusters' 00:30:55.041 20:48:13 -- common/autotest_common.sh@1348 -- # fc=5114 00:30:55.041 20:48:13 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="7ce09cad-de46-450a-9684-7dbb7408e29b") .cluster_size' 00:30:55.041 20:48:13 -- common/autotest_common.sh@1349 -- # cs=4194304 00:30:55.041 20:48:13 -- common/autotest_common.sh@1352 -- # free_mb=20456 
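# Note: get_lvs_free_mb in the trace above derives the usable size from the
# bdev_lvol_get_lvstores JSON as free_clusters * cluster_size, reported in
# MiB. Worked out for the two stores shown (4194304-byte = 4 MiB clusters):
#
#   lvs_0   : 476466 free clusters * 4 MiB = 1905864 MiB -> capped to 20480
#   lvs_n_0 :   5114 free clusters * 4 MiB =   20456 MiB -> used as-is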
00:30:55.041 20:48:13 -- common/autotest_common.sh@1353 -- # echo 20456 00:30:55.041 20456 00:30:55.041 20:48:13 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:55.041 20:48:13 -- host/perf.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7ce09cad-de46-450a-9684-7dbb7408e29b lbd_nest_0 20456 00:30:55.301 20:48:13 -- host/perf.sh@88 -- # lb_nested_guid=18f358ff-df4f-41c3-9a1b-1050759d4bbe 00:30:55.301 20:48:13 -- host/perf.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:55.301 20:48:13 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:55.301 20:48:13 -- host/perf.sh@91 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 18f358ff-df4f-41c3-9a1b-1050759d4bbe 00:30:55.562 20:48:13 -- host/perf.sh@93 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:55.820 20:48:13 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:55.820 20:48:13 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:55.820 20:48:13 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:55.820 20:48:13 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:55.820 20:48:13 -- host/perf.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:55.820 EAL: No free 2048 kB hugepages reported on node 1 00:31:08.028 Initializing NVMe Controllers 00:31:08.028 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:08.028 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:08.028 Initialization complete. Launching workers. 00:31:08.028 ======================================================== 00:31:08.028 Latency(us) 00:31:08.028 Device Information : IOPS MiB/s Average min max 00:31:08.028 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 43.48 0.02 23076.36 228.43 49322.54 00:31:08.028 ======================================================== 00:31:08.028 Total : 43.48 0.02 23076.36 228.43 49322.54 00:31:08.028 00:31:08.028 20:48:24 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:08.028 20:48:24 -- host/perf.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:08.028 EAL: No free 2048 kB hugepages reported on node 1 00:31:18.015 Initializing NVMe Controllers 00:31:18.015 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:18.015 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:18.015 Initialization complete. Launching workers. 
00:31:18.015 ======================================================== 00:31:18.015 Latency(us) 00:31:18.015 Device Information : IOPS MiB/s Average min max 00:31:18.015 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 75.40 9.43 13270.99 6265.03 48005.89 00:31:18.015 ======================================================== 00:31:18.015 Total : 75.40 9.43 13270.99 6265.03 48005.89 00:31:18.015 00:31:18.015 20:48:34 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:18.015 20:48:34 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:18.015 20:48:34 -- host/perf.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:18.015 EAL: No free 2048 kB hugepages reported on node 1 00:31:28.000 Initializing NVMe Controllers 00:31:28.000 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:28.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:28.000 Initialization complete. Launching workers. 00:31:28.000 ======================================================== 00:31:28.000 Latency(us) 00:31:28.000 Device Information : IOPS MiB/s Average min max 00:31:28.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9079.96 4.43 3530.19 230.65 42316.80 00:31:28.000 ======================================================== 00:31:28.000 Total : 9079.96 4.43 3530.19 230.65 42316.80 00:31:28.000 00:31:28.000 20:48:45 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:28.000 20:48:45 -- host/perf.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:28.000 EAL: No free 2048 kB hugepages reported on node 1 00:31:37.977 Initializing NVMe Controllers 00:31:37.977 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:37.977 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:37.977 Initialization complete. Launching workers. 00:31:37.977 ======================================================== 00:31:37.977 Latency(us) 00:31:37.977 Device Information : IOPS MiB/s Average min max 00:31:37.977 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2520.61 315.08 12700.30 953.76 30759.58 00:31:37.977 ======================================================== 00:31:37.977 Total : 2520.61 315.08 12700.30 953.76 30759.58 00:31:37.977 00:31:37.977 20:48:55 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:37.977 20:48:55 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:37.977 20:48:55 -- host/perf.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:37.977 EAL: No free 2048 kB hugepages reported on node 1 00:31:47.984 Initializing NVMe Controllers 00:31:47.984 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:47.984 Controller IO queue size 128, less than required. 00:31:47.984 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:47.984 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:47.984 Initialization complete. Launching workers. 
00:31:47.984 ======================================================== 00:31:47.984 Latency(us) 00:31:47.984 Device Information : IOPS MiB/s Average min max 00:31:47.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15858.40 7.74 8071.39 1412.93 22660.21 00:31:47.984 ======================================================== 00:31:47.984 Total : 15858.40 7.74 8071.39 1412.93 22660.21 00:31:47.984 00:31:47.984 20:49:06 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:47.984 20:49:06 -- host/perf.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:47.984 EAL: No free 2048 kB hugepages reported on node 1 00:32:00.288 Initializing NVMe Controllers 00:32:00.288 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:00.288 Controller IO queue size 128, less than required. 00:32:00.288 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:00.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:00.288 Initialization complete. Launching workers. 00:32:00.288 ======================================================== 00:32:00.288 Latency(us) 00:32:00.288 Device Information : IOPS MiB/s Average min max 00:32:00.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1208.36 151.04 106869.21 15450.96 215226.67 00:32:00.288 ======================================================== 00:32:00.288 Total : 1208.36 151.04 106869.21 15450.96 215226.67 00:32:00.288 00:32:00.288 20:49:16 -- host/perf.sh@104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:00.288 20:49:16 -- host/perf.sh@105 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 18f358ff-df4f-41c3-9a1b-1050759d4bbe 00:32:00.288 20:49:17 -- host/perf.sh@106 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:00.288 20:49:17 -- host/perf.sh@107 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 62a20815-eeb4-4720-b15d-61ae5814adf3 00:32:00.288 20:49:17 -- host/perf.sh@108 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:00.288 20:49:17 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:32:00.288 20:49:17 -- host/perf.sh@114 -- # nvmftestfini 00:32:00.288 20:49:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:00.288 20:49:17 -- nvmf/common.sh@116 -- # sync 00:32:00.288 20:49:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:00.288 20:49:17 -- nvmf/common.sh@119 -- # set +e 00:32:00.288 20:49:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:00.288 20:49:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:00.288 rmmod nvme_tcp 00:32:00.288 rmmod nvme_fabrics 00:32:00.288 rmmod nvme_keyring 00:32:00.288 20:49:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:00.288 20:49:17 -- nvmf/common.sh@123 -- # set -e 00:32:00.288 20:49:17 -- nvmf/common.sh@124 -- # return 0 00:32:00.288 20:49:17 -- nvmf/common.sh@477 -- # '[' -n 3712153 ']' 00:32:00.288 20:49:17 -- nvmf/common.sh@478 -- # killprocess 3712153 00:32:00.288 20:49:17 -- common/autotest_common.sh@926 -- # '[' -z 3712153 ']' 00:32:00.288 20:49:17 -- common/autotest_common.sh@930 -- # kill -0 3712153 00:32:00.288 
20:49:17 -- common/autotest_common.sh@931 -- # uname 00:32:00.288 20:49:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:00.288 20:49:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3712153 00:32:00.288 20:49:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:00.288 20:49:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:00.288 20:49:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3712153' 00:32:00.288 killing process with pid 3712153 00:32:00.288 20:49:17 -- common/autotest_common.sh@945 -- # kill 3712153 00:32:00.288 20:49:17 -- common/autotest_common.sh@950 -- # wait 3712153 00:32:02.826 20:49:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:02.826 20:49:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:02.826 20:49:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:02.826 20:49:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:02.826 20:49:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:02.826 20:49:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:02.826 20:49:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:02.826 20:49:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:04.731 20:49:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:04.731 00:32:04.731 real 1m41.979s 00:32:04.731 user 6m12.770s 00:32:04.731 sys 0m10.947s 00:32:04.731 20:49:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:04.731 20:49:23 -- common/autotest_common.sh@10 -- # set +x 00:32:04.731 ************************************ 00:32:04.731 END TEST nvmf_perf 00:32:04.731 ************************************ 00:32:04.731 20:49:23 -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:04.731 20:49:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:04.731 20:49:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:04.731 20:49:23 -- common/autotest_common.sh@10 -- # set +x 00:32:04.731 ************************************ 00:32:04.731 START TEST nvmf_fio_host 00:32:04.731 ************************************ 00:32:04.731 20:49:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:04.991 * Looking for test storage... 
00:32:04.991 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:32:04.991 20:49:23 -- host/fio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:32:04.991 20:49:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:04.991 20:49:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:04.991 20:49:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:04.991 20:49:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.991 20:49:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.992 20:49:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.992 20:49:23 -- paths/export.sh@5 -- # export PATH 00:32:04.992 20:49:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.992 20:49:23 -- host/fio.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:32:04.992 20:49:23 -- nvmf/common.sh@7 -- # uname -s 00:32:04.992 20:49:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:04.992 20:49:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:04.992 20:49:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:04.992 20:49:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:04.992 20:49:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:04.992 20:49:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:04.992 20:49:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:04.992 20:49:23 -- nvmf/common.sh@15 
-- # NVMF_TRANSPORT_OPTS= 00:32:04.992 20:49:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:04.992 20:49:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:04.992 20:49:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:32:04.992 20:49:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:32:04.992 20:49:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:04.992 20:49:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:04.992 20:49:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:32:04.992 20:49:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:32:04.992 20:49:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:04.992 20:49:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:04.992 20:49:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:04.992 20:49:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.992 20:49:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.992 20:49:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.992 20:49:23 -- paths/export.sh@5 -- # export PATH 00:32:04.992 20:49:23 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.992 20:49:23 -- nvmf/common.sh@46 -- # : 0 00:32:04.992 20:49:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:04.992 20:49:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:04.992 20:49:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:04.992 20:49:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:04.992 20:49:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:04.992 20:49:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:04.992 20:49:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:04.992 20:49:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:04.992 20:49:23 -- host/fio.sh@12 -- # nvmftestinit 00:32:04.992 20:49:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:04.992 20:49:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:04.992 20:49:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:04.992 20:49:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:04.992 20:49:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:04.992 20:49:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:04.992 20:49:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:04.992 20:49:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:04.992 20:49:23 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:32:04.992 20:49:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:04.992 20:49:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:04.992 20:49:23 -- common/autotest_common.sh@10 -- # set +x 00:32:10.269 20:49:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:10.269 20:49:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:10.269 20:49:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:10.269 20:49:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:10.269 20:49:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:10.269 20:49:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:10.269 20:49:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:10.269 20:49:28 -- nvmf/common.sh@294 -- # net_devs=() 00:32:10.269 20:49:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:10.269 20:49:28 -- nvmf/common.sh@295 -- # e810=() 00:32:10.269 20:49:28 -- nvmf/common.sh@295 -- # local -ga e810 00:32:10.269 20:49:28 -- nvmf/common.sh@296 -- # x722=() 00:32:10.269 20:49:28 -- nvmf/common.sh@296 -- # local -ga x722 00:32:10.269 20:49:28 -- nvmf/common.sh@297 -- # mlx=() 00:32:10.269 20:49:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:10.269 20:49:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:10.269 20:49:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:10.269 20:49:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:10.269 20:49:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:10.269 20:49:28 -- nvmf/common.sh@307 -- 
# mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:10.269 20:49:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:10.270 20:49:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:10.270 20:49:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:10.270 20:49:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:10.270 20:49:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:10.270 20:49:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:10.270 20:49:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:10.270 20:49:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:10.270 20:49:28 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:32:10.270 20:49:28 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:32:10.270 20:49:28 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:32:10.270 20:49:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:10.270 20:49:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:10.270 20:49:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:32:10.270 Found 0000:27:00.0 (0x8086 - 0x159b) 00:32:10.270 20:49:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:10.270 20:49:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:10.270 20:49:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:10.270 20:49:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:10.270 20:49:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:10.270 20:49:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:10.270 20:49:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:32:10.270 Found 0000:27:00.1 (0x8086 - 0x159b) 00:32:10.270 20:49:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:10.270 20:49:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:10.270 20:49:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:10.270 20:49:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:10.270 20:49:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:10.270 20:49:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:10.270 20:49:28 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:32:10.270 20:49:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:10.270 20:49:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:10.270 20:49:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:10.270 20:49:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:10.270 20:49:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:32:10.270 Found net devices under 0000:27:00.0: cvl_0_0 00:32:10.270 20:49:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:10.270 20:49:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:10.270 20:49:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:10.270 20:49:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:10.270 20:49:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:10.270 20:49:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:32:10.270 Found net devices under 0000:27:00.1: cvl_0_1 00:32:10.270 20:49:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:10.270 20:49:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:10.270 20:49:28 -- nvmf/common.sh@402 -- # 
is_hw=yes 00:32:10.270 20:49:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:10.270 20:49:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:10.270 20:49:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:10.270 20:49:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:10.270 20:49:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:10.270 20:49:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:10.270 20:49:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:10.270 20:49:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:10.270 20:49:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:10.270 20:49:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:10.270 20:49:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:10.270 20:49:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:10.270 20:49:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:10.270 20:49:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:10.270 20:49:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:10.270 20:49:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:10.270 20:49:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:10.270 20:49:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:10.270 20:49:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:10.270 20:49:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:10.270 20:49:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:10.270 20:49:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:10.270 20:49:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:10.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:10.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:32:10.270 00:32:10.270 --- 10.0.0.2 ping statistics --- 00:32:10.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:10.270 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:32:10.270 20:49:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:10.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:10.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:32:10.270 00:32:10.270 --- 10.0.0.1 ping statistics --- 00:32:10.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:10.270 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:32:10.270 20:49:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:10.270 20:49:28 -- nvmf/common.sh@410 -- # return 0 00:32:10.270 20:49:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:10.270 20:49:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:10.270 20:49:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:10.270 20:49:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:10.270 20:49:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:10.270 20:49:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:10.270 20:49:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:10.270 20:49:28 -- host/fio.sh@14 -- # [[ y != y ]] 00:32:10.270 20:49:28 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:32:10.270 20:49:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:10.270 20:49:28 -- common/autotest_common.sh@10 -- # set +x 00:32:10.270 20:49:28 -- host/fio.sh@22 -- # nvmfpid=3734736 00:32:10.270 20:49:28 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:10.270 20:49:28 -- host/fio.sh@26 -- # waitforlisten 3734736 00:32:10.270 20:49:28 -- common/autotest_common.sh@819 -- # '[' -z 3734736 ']' 00:32:10.270 20:49:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:10.270 20:49:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:10.270 20:49:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:10.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:10.270 20:49:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:10.270 20:49:28 -- common/autotest_common.sh@10 -- # set +x 00:32:10.270 20:49:28 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:10.270 [2024-04-26 20:49:28.478183] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:32:10.270 [2024-04-26 20:49:28.478297] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:10.270 EAL: No free 2048 kB hugepages reported on node 1 00:32:10.270 [2024-04-26 20:49:28.601520] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:10.529 [2024-04-26 20:49:28.699149] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:10.529 [2024-04-26 20:49:28.699335] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:10.529 [2024-04-26 20:49:28.699351] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:10.529 [2024-04-26 20:49:28.699361] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
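
At this point fio.sh has started the target inside the cvl_0_0_ns_spdk namespace and blocks in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers, which is why the DPDK/EAL banner above appears before any rpc_cmd runs. A minimal sketch of that startup handshake, assuming a simple polling loop in place of SPDK's waitforlisten helper:

  ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &  # -m 0xF: four reactor cores, as logged
  nvmfpid=$!
  while ! ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || exit 1      # give up if the target died during init
    sleep 0.5
  done
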
00:32:10.529 [2024-04-26 20:49:28.699443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:10.529 [2024-04-26 20:49:28.699469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:10.529 [2024-04-26 20:49:28.699570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:10.529 [2024-04-26 20:49:28.699579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:11.095 20:49:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:11.095 20:49:29 -- common/autotest_common.sh@852 -- # return 0 00:32:11.095 20:49:29 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:11.095 20:49:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:11.095 20:49:29 -- common/autotest_common.sh@10 -- # set +x 00:32:11.095 [2024-04-26 20:49:29.170527] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:11.095 20:49:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:11.095 20:49:29 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:32:11.095 20:49:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:11.095 20:49:29 -- common/autotest_common.sh@10 -- # set +x 00:32:11.095 20:49:29 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:32:11.095 20:49:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:11.095 20:49:29 -- common/autotest_common.sh@10 -- # set +x 00:32:11.095 Malloc1 00:32:11.095 20:49:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:11.095 20:49:29 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:11.095 20:49:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:11.095 20:49:29 -- common/autotest_common.sh@10 -- # set +x 00:32:11.095 20:49:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:11.095 20:49:29 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:11.095 20:49:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:11.095 20:49:29 -- common/autotest_common.sh@10 -- # set +x 00:32:11.095 20:49:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:11.095 20:49:29 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:11.095 20:49:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:11.095 20:49:29 -- common/autotest_common.sh@10 -- # set +x 00:32:11.095 [2024-04-26 20:49:29.268396] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:11.095 20:49:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:11.095 20:49:29 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:11.095 20:49:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:11.095 20:49:29 -- common/autotest_common.sh@10 -- # set +x 00:32:11.095 20:49:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:11.095 20:49:29 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme 00:32:11.095 20:49:29 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:11.095 20:49:29 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:11.095 20:49:29 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:11.095 20:49:29 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:11.095 20:49:29 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:11.095 20:49:29 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:32:11.095 20:49:29 -- common/autotest_common.sh@1320 -- # shift 00:32:11.095 20:49:29 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:11.095 20:49:29 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:11.095 20:49:29 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:32:11.095 20:49:29 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:11.095 20:49:29 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:11.095 20:49:29 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:11.095 20:49:29 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:11.095 20:49:29 -- common/autotest_common.sh@1326 -- # break 00:32:11.095 20:49:29 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:11.095 20:49:29 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:11.354 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:11.354 fio-3.35 00:32:11.354 Starting 1 thread 00:32:11.612 EAL: No free 2048 kB hugepages reported on node 1 00:32:14.145 00:32:14.145 test: (groupid=0, jobs=1): err= 0: pid=3735206: Fri Apr 26 20:49:32 2024 00:32:14.145 read: IOPS=13.0k, BW=50.6MiB/s (53.1MB/s)(102MiB/2005msec) 00:32:14.145 slat (nsec): min=1596, max=143913, avg=2706.70, stdev=1349.13 00:32:14.145 clat (usec): min=3192, max=9470, avg=5418.56, stdev=399.91 00:32:14.145 lat (usec): min=3217, max=9471, avg=5421.27, stdev=399.86 00:32:14.145 clat percentiles (usec): 00:32:14.145 | 1.00th=[ 4621], 5.00th=[ 4817], 10.00th=[ 4948], 20.00th=[ 5080], 00:32:14.145 | 30.00th=[ 5211], 40.00th=[ 5342], 50.00th=[ 5407], 60.00th=[ 5473], 00:32:14.145 | 70.00th=[ 5604], 80.00th=[ 5735], 90.00th=[ 5866], 95.00th=[ 6063], 00:32:14.145 | 99.00th=[ 6587], 99.50th=[ 6915], 99.90th=[ 7832], 99.95th=[ 8586], 00:32:14.145 | 99.99th=[ 9372] 00:32:14.145 bw ( KiB/s): min=50328, max=52720, per=99.98%, avg=51854.00, stdev=1050.70, samples=4 00:32:14.145 iops : min=12582, max=13180, avg=12963.50, stdev=262.68, samples=4 00:32:14.145 write: IOPS=13.0k, BW=50.6MiB/s (53.0MB/s)(101MiB/2005msec); 0 zone resets 00:32:14.145 slat (nsec): min=1643, max=129930, avg=2787.34, stdev=1055.97 00:32:14.145 clat (usec): min=1473, max=8503, avg=4401.56, stdev=336.88 00:32:14.145 lat (usec): min=1483, max=8504, avg=4404.34, stdev=336.91 00:32:14.145 clat percentiles (usec): 00:32:14.145 | 1.00th=[ 3687], 5.00th=[ 3916], 10.00th=[ 4015], 20.00th=[ 4146], 00:32:14.145 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4490], 00:32:14.145 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4817], 95.00th=[ 4883], 00:32:14.145 | 99.00th=[ 5342], 99.50th=[ 5538], 99.90th=[ 6390], 99.95th=[ 7308], 00:32:14.145 | 
99.99th=[ 8455] 00:32:14.145 bw ( KiB/s): min=50872, max=52416, per=100.00%, avg=51822.00, stdev=722.61, samples=4 00:32:14.145 iops : min=12718, max=13104, avg=12955.50, stdev=180.65, samples=4 00:32:14.145 lat (msec) : 2=0.02%, 4=4.45%, 10=95.54% 00:32:14.145 cpu : usr=85.53%, sys=14.07%, ctx=4, majf=0, minf=1526 00:32:14.145 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:32:14.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:14.145 issued rwts: total=25997,25968,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:14.145 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:14.145 00:32:14.145 Run status group 0 (all jobs): 00:32:14.145 READ: bw=50.6MiB/s (53.1MB/s), 50.6MiB/s-50.6MiB/s (53.1MB/s-53.1MB/s), io=102MiB (106MB), run=2005-2005msec 00:32:14.145 WRITE: bw=50.6MiB/s (53.0MB/s), 50.6MiB/s-50.6MiB/s (53.0MB/s-53.0MB/s), io=101MiB (106MB), run=2005-2005msec 00:32:14.145 ----------------------------------------------------- 00:32:14.145 Suppressions used: 00:32:14.145 count bytes template 00:32:14.145 1 57 /usr/src/fio/parse.c 00:32:14.145 1 8 libtcmalloc_minimal.so 00:32:14.145 ----------------------------------------------------- 00:32:14.145 00:32:14.145 20:49:32 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:14.145 20:49:32 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:14.145 20:49:32 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:14.145 20:49:32 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:14.145 20:49:32 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:14.145 20:49:32 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:32:14.145 20:49:32 -- common/autotest_common.sh@1320 -- # shift 00:32:14.145 20:49:32 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:14.145 20:49:32 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:14.145 20:49:32 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:32:14.145 20:49:32 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:14.145 20:49:32 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:14.145 20:49:32 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:14.145 20:49:32 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:14.145 20:49:32 -- common/autotest_common.sh@1326 -- # break 00:32:14.145 20:49:32 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:14.145 20:49:32 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:14.711 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:32:14.711 fio-3.35 00:32:14.711 
Starting 1 thread 00:32:14.711 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.238 00:32:17.238 test: (groupid=0, jobs=1): err= 0: pid=3735944: Fri Apr 26 20:49:35 2024 00:32:17.238 read: IOPS=9155, BW=143MiB/s (150MB/s)(287MiB/2006msec) 00:32:17.238 slat (usec): min=2, max=142, avg= 3.76, stdev= 1.88 00:32:17.238 clat (usec): min=2318, max=51104, avg=8594.35, stdev=4156.75 00:32:17.238 lat (usec): min=2320, max=51107, avg=8598.10, stdev=4157.20 00:32:17.238 clat percentiles (usec): 00:32:17.238 | 1.00th=[ 3916], 5.00th=[ 4686], 10.00th=[ 5211], 20.00th=[ 5932], 00:32:17.238 | 30.00th=[ 6652], 40.00th=[ 7373], 50.00th=[ 7832], 60.00th=[ 8717], 00:32:17.238 | 70.00th=[ 9765], 80.00th=[10814], 90.00th=[12256], 95.00th=[13435], 00:32:17.238 | 99.00th=[15401], 99.50th=[45351], 99.90th=[50070], 99.95th=[50594], 00:32:17.238 | 99.99th=[51119] 00:32:17.238 bw ( KiB/s): min=53152, max=86816, per=48.99%, avg=71760.00, stdev=14644.81, samples=4 00:32:17.238 iops : min= 3322, max= 5426, avg=4485.00, stdev=915.30, samples=4 00:32:17.238 write: IOPS=5426, BW=84.8MiB/s (88.9MB/s)(147MiB/1732msec); 0 zone resets 00:32:17.238 slat (usec): min=28, max=199, avg=39.36, stdev=11.59 00:32:17.238 clat (usec): min=3154, max=17389, avg=9514.19, stdev=2413.59 00:32:17.238 lat (usec): min=3182, max=17440, avg=9553.55, stdev=2422.77 00:32:17.238 clat percentiles (usec): 00:32:17.238 | 1.00th=[ 5473], 5.00th=[ 6259], 10.00th=[ 6652], 20.00th=[ 7177], 00:32:17.238 | 30.00th=[ 7701], 40.00th=[ 8225], 50.00th=[ 9241], 60.00th=[10290], 00:32:17.238 | 70.00th=[11207], 80.00th=[11863], 90.00th=[12780], 95.00th=[13566], 00:32:17.238 | 99.00th=[14746], 99.50th=[15139], 99.90th=[16450], 99.95th=[16712], 00:32:17.238 | 99.99th=[17433] 00:32:17.238 bw ( KiB/s): min=55488, max=91136, per=86.31%, avg=74928.00, stdev=15448.07, samples=4 00:32:17.238 iops : min= 3468, max= 5696, avg=4683.00, stdev=965.50, samples=4 00:32:17.238 lat (msec) : 4=0.91%, 10=66.47%, 20=32.16%, 50=0.38%, 100=0.08% 00:32:17.238 cpu : usr=88.53%, sys=11.07%, ctx=9, majf=0, minf=2310 00:32:17.238 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:32:17.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:17.238 issued rwts: total=18365,9398,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.238 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:17.238 00:32:17.238 Run status group 0 (all jobs): 00:32:17.238 READ: bw=143MiB/s (150MB/s), 143MiB/s-143MiB/s (150MB/s-150MB/s), io=287MiB (301MB), run=2006-2006msec 00:32:17.238 WRITE: bw=84.8MiB/s (88.9MB/s), 84.8MiB/s-84.8MiB/s (88.9MB/s-88.9MB/s), io=147MiB (154MB), run=1732-1732msec 00:32:17.238 ----------------------------------------------------- 00:32:17.238 Suppressions used: 00:32:17.238 count bytes template 00:32:17.238 1 57 /usr/src/fio/parse.c 00:32:17.238 34 3264 /usr/src/fio/iolog.c 00:32:17.238 1 8 libtcmalloc_minimal.so 00:32:17.238 ----------------------------------------------------- 00:32:17.238 00:32:17.238 20:49:35 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:17.238 20:49:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:17.238 20:49:35 -- common/autotest_common.sh@10 -- # set +x 00:32:17.238 20:49:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:17.238 20:49:35 -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:32:17.238 20:49:35 -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 
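
Both fio passes above follow the same launch pattern from fio_nvme: ldd locates the sanitizer runtime the SPDK fio plugin was linked against, libasan is placed ahead of the plugin in LD_PRELOAD (the ASan runtime must be loaded first), and the remote NVMe-oF namespace is addressed through the plugin's filename syntax instead of a block device. A condensed sketch with the paths from this log:

  plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')  # /usr/lib64/libasan.so.8 here
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
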
00:32:17.238 20:49:35 -- host/fio.sh@49 -- # get_nvme_bdfs 00:32:17.238 20:49:35 -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:17.238 20:49:35 -- common/autotest_common.sh@1498 -- # local bdfs 00:32:17.238 20:49:35 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:17.238 20:49:35 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:17.238 20:49:35 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:17.238 20:49:35 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:32:17.238 20:49:35 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:c9:00.0 0000:ca:00.0 00:32:17.238 20:49:35 -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:c9:00.0 -i 10.0.0.2 00:32:17.238 20:49:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:17.238 20:49:35 -- common/autotest_common.sh@10 -- # set +x 00:32:20.525 Nvme0n1 00:32:20.525 20:49:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:20.525 20:49:38 -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:32:20.525 20:49:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:20.525 20:49:38 -- common/autotest_common.sh@10 -- # set +x 00:32:25.792 20:49:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:25.792 20:49:43 -- host/fio.sh@51 -- # ls_guid=226e04af-77ab-4914-8368-1290e59ff2ff 00:32:25.792 20:49:43 -- host/fio.sh@52 -- # get_lvs_free_mb 226e04af-77ab-4914-8368-1290e59ff2ff 00:32:25.792 20:49:43 -- common/autotest_common.sh@1343 -- # local lvs_uuid=226e04af-77ab-4914-8368-1290e59ff2ff 00:32:25.792 20:49:43 -- common/autotest_common.sh@1344 -- # local lvs_info 00:32:25.792 20:49:43 -- common/autotest_common.sh@1345 -- # local fc 00:32:25.792 20:49:43 -- common/autotest_common.sh@1346 -- # local cs 00:32:25.792 20:49:43 -- common/autotest_common.sh@1347 -- # rpc_cmd bdev_lvol_get_lvstores 00:32:25.792 20:49:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:25.792 20:49:43 -- common/autotest_common.sh@10 -- # set +x 00:32:25.792 20:49:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:25.792 20:49:43 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:32:25.792 { 00:32:25.792 "uuid": "226e04af-77ab-4914-8368-1290e59ff2ff", 00:32:25.792 "name": "lvs_0", 00:32:25.792 "base_bdev": "Nvme0n1", 00:32:25.792 "total_data_clusters": 1862, 00:32:25.792 "free_clusters": 1862, 00:32:25.792 "block_size": 512, 00:32:25.792 "cluster_size": 1073741824 00:32:25.792 } 00:32:25.792 ]' 00:32:25.792 20:49:43 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="226e04af-77ab-4914-8368-1290e59ff2ff") .free_clusters' 00:32:25.792 20:49:43 -- common/autotest_common.sh@1348 -- # fc=1862 00:32:25.792 20:49:43 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="226e04af-77ab-4914-8368-1290e59ff2ff") .cluster_size' 00:32:25.792 20:49:43 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:32:25.792 20:49:43 -- common/autotest_common.sh@1352 -- # free_mb=1906688 00:32:25.792 20:49:43 -- common/autotest_common.sh@1353 -- # echo 1906688 00:32:25.792 1906688 00:32:25.792 20:49:43 -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 1906688 00:32:25.792 20:49:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:25.792 20:49:43 -- common/autotest_common.sh@10 -- # set +x 00:32:25.792 85ace554-e97d-46bf-b4e1-c8811a588c55 00:32:25.792 20:49:44 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:25.792 20:49:44 -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:32:25.792 20:49:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:25.792 20:49:44 -- common/autotest_common.sh@10 -- # set +x 00:32:25.792 20:49:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:25.792 20:49:44 -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:32:25.792 20:49:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:25.792 20:49:44 -- common/autotest_common.sh@10 -- # set +x 00:32:25.792 20:49:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:25.792 20:49:44 -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:25.792 20:49:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:25.792 20:49:44 -- common/autotest_common.sh@10 -- # set +x 00:32:25.792 20:49:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:25.792 20:49:44 -- host/fio.sh@57 -- # fio_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:25.792 20:49:44 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:25.792 20:49:44 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:25.792 20:49:44 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:25.792 20:49:44 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:25.792 20:49:44 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:32:25.792 20:49:44 -- common/autotest_common.sh@1320 -- # shift 00:32:25.792 20:49:44 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:25.792 20:49:44 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:25.792 20:49:44 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:32:25.792 20:49:44 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:25.792 20:49:44 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:26.062 20:49:44 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:26.062 20:49:44 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:26.062 20:49:44 -- common/autotest_common.sh@1326 -- # break 00:32:26.062 20:49:44 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:26.062 20:49:44 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:26.321 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:26.321 fio-3.35 00:32:26.321 Starting 1 thread 00:32:26.321 EAL: No free 2048 kB hugepages reported on node 1 00:32:28.848 00:32:28.848 test: (groupid=0, jobs=1): err= 0: pid=3738389: Fri Apr 26 20:49:46 2024 00:32:28.848 read: IOPS=6912, BW=27.0MiB/s 
(28.3MB/s)(54.2MiB/2006msec) 00:32:28.848 slat (nsec): min=1579, max=152035, avg=2174.61, stdev=1869.23 00:32:28.848 clat (usec): min=424, max=482209, avg=10056.48, stdev=32364.88 00:32:28.848 lat (usec): min=427, max=482217, avg=10058.65, stdev=32365.19 00:32:28.848 clat percentiles (msec): 00:32:28.848 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:32:28.848 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 8], 00:32:28.848 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 9], 00:32:28.848 | 99.00th=[ 10], 99.50th=[ 13], 99.90th=[ 481], 99.95th=[ 481], 00:32:28.848 | 99.99th=[ 481] 00:32:28.848 bw ( KiB/s): min= 1456, max=36672, per=99.82%, avg=27598.00, stdev=17430.32, samples=4 00:32:28.848 iops : min= 364, max= 9168, avg=6899.50, stdev=4357.58, samples=4 00:32:28.848 write: IOPS=6918, BW=27.0MiB/s (28.3MB/s)(54.2MiB/2006msec); 0 zone resets 00:32:28.848 slat (nsec): min=1648, max=133917, avg=2288.33, stdev=1338.73 00:32:28.848 clat (usec): min=338, max=480329, avg=8344.70, stdev=31413.97 00:32:28.848 lat (usec): min=343, max=480337, avg=8346.99, stdev=31414.26 00:32:28.848 clat percentiles (msec): 00:32:28.848 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:32:28.848 | 30.00th=[ 6], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:32:28.848 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 7], 95.00th=[ 8], 00:32:28.848 | 99.00th=[ 8], 99.50th=[ 11], 99.90th=[ 481], 99.95th=[ 481], 00:32:28.848 | 99.99th=[ 481] 00:32:28.848 bw ( KiB/s): min= 1560, max=36600, per=99.87%, avg=27636.00, stdev=17385.46, samples=4 00:32:28.848 iops : min= 390, max= 9150, avg=6909.00, stdev=4346.37, samples=4 00:32:28.848 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:32:28.848 lat (msec) : 2=0.06%, 4=0.24%, 10=99.07%, 20=0.14%, 500=0.46% 00:32:28.848 cpu : usr=88.18%, sys=11.47%, ctx=4, majf=0, minf=1522 00:32:28.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:28.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:28.848 issued rwts: total=13866,13878,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.848 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:28.848 00:32:28.848 Run status group 0 (all jobs): 00:32:28.848 READ: bw=27.0MiB/s (28.3MB/s), 27.0MiB/s-27.0MiB/s (28.3MB/s-28.3MB/s), io=54.2MiB (56.8MB), run=2006-2006msec 00:32:28.848 WRITE: bw=27.0MiB/s (28.3MB/s), 27.0MiB/s-27.0MiB/s (28.3MB/s-28.3MB/s), io=54.2MiB (56.8MB), run=2006-2006msec 00:32:28.848 ----------------------------------------------------- 00:32:28.848 Suppressions used: 00:32:28.848 count bytes template 00:32:28.848 1 58 /usr/src/fio/parse.c 00:32:28.848 1 8 libtcmalloc_minimal.so 00:32:28.848 ----------------------------------------------------- 00:32:28.848 00:32:28.848 20:49:47 -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:28.848 20:49:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:28.848 20:49:47 -- common/autotest_common.sh@10 -- # set +x 00:32:28.848 20:49:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:28.848 20:49:47 -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:32:28.848 20:49:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:28.848 20:49:47 -- common/autotest_common.sh@10 -- # set +x 00:32:29.782 20:49:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:30.042 20:49:48 -- host/fio.sh@62 -- # 
ls_nested_guid=e920e0f8-a87f-4db6-bf43-0e1a59150d96 00:32:30.042 20:49:48 -- host/fio.sh@63 -- # get_lvs_free_mb e920e0f8-a87f-4db6-bf43-0e1a59150d96 00:32:30.042 20:49:48 -- common/autotest_common.sh@1343 -- # local lvs_uuid=e920e0f8-a87f-4db6-bf43-0e1a59150d96 00:32:30.042 20:49:48 -- common/autotest_common.sh@1344 -- # local lvs_info 00:32:30.042 20:49:48 -- common/autotest_common.sh@1345 -- # local fc 00:32:30.042 20:49:48 -- common/autotest_common.sh@1346 -- # local cs 00:32:30.042 20:49:48 -- common/autotest_common.sh@1347 -- # rpc_cmd bdev_lvol_get_lvstores 00:32:30.042 20:49:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:30.042 20:49:48 -- common/autotest_common.sh@10 -- # set +x 00:32:30.042 20:49:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:30.042 20:49:48 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:32:30.042 { 00:32:30.042 "uuid": "226e04af-77ab-4914-8368-1290e59ff2ff", 00:32:30.042 "name": "lvs_0", 00:32:30.042 "base_bdev": "Nvme0n1", 00:32:30.042 "total_data_clusters": 1862, 00:32:30.042 "free_clusters": 0, 00:32:30.042 "block_size": 512, 00:32:30.042 "cluster_size": 1073741824 00:32:30.042 }, 00:32:30.042 { 00:32:30.042 "uuid": "e920e0f8-a87f-4db6-bf43-0e1a59150d96", 00:32:30.042 "name": "lvs_n_0", 00:32:30.042 "base_bdev": "85ace554-e97d-46bf-b4e1-c8811a588c55", 00:32:30.042 "total_data_clusters": 476206, 00:32:30.042 "free_clusters": 476206, 00:32:30.042 "block_size": 512, 00:32:30.042 "cluster_size": 4194304 00:32:30.042 } 00:32:30.042 ]' 00:32:30.042 20:49:48 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="e920e0f8-a87f-4db6-bf43-0e1a59150d96") .free_clusters' 00:32:30.042 20:49:48 -- common/autotest_common.sh@1348 -- # fc=476206 00:32:30.042 20:49:48 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="e920e0f8-a87f-4db6-bf43-0e1a59150d96") .cluster_size' 00:32:30.042 20:49:48 -- common/autotest_common.sh@1349 -- # cs=4194304 00:32:30.042 20:49:48 -- common/autotest_common.sh@1352 -- # free_mb=1904824 00:32:30.042 20:49:48 -- common/autotest_common.sh@1353 -- # echo 1904824 00:32:30.042 1904824 00:32:30.042 20:49:48 -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 1904824 00:32:30.042 20:49:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:30.042 20:49:48 -- common/autotest_common.sh@10 -- # set +x 00:32:31.946 0298f39d-ae35-44b5-9898-d00051c13933 00:32:31.946 20:49:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:31.946 20:49:49 -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:32:31.946 20:49:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:31.946 20:49:49 -- common/autotest_common.sh@10 -- # set +x 00:32:31.946 20:49:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:31.946 20:49:49 -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:32:31.946 20:49:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:31.946 20:49:49 -- common/autotest_common.sh@10 -- # set +x 00:32:31.946 20:49:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:31.946 20:49:49 -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:32:31.946 20:49:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:31.946 20:49:49 -- common/autotest_common.sh@10 -- # set +x 00:32:31.946 20:49:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:31.946 20:49:49 -- host/fio.sh@68 -- # 
fio_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:31.946 20:49:49 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:31.947 20:49:49 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:31.947 20:49:49 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:31.947 20:49:49 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:31.947 20:49:49 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:32:31.947 20:49:49 -- common/autotest_common.sh@1320 -- # shift 00:32:31.947 20:49:49 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:31.947 20:49:49 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:31.947 20:49:49 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:32:31.947 20:49:49 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:31.947 20:49:49 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:31.947 20:49:49 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:31.947 20:49:49 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:31.947 20:49:49 -- common/autotest_common.sh@1326 -- # break 00:32:31.947 20:49:49 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:31.947 20:49:49 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:32.205 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:32.205 fio-3.35 00:32:32.205 Starting 1 thread 00:32:32.205 EAL: No free 2048 kB hugepages reported on node 1 00:32:34.736 00:32:34.736 test: (groupid=0, jobs=1): err= 0: pid=3739607: Fri Apr 26 20:49:52 2024 00:32:34.736 read: IOPS=8687, BW=33.9MiB/s (35.6MB/s)(68.1MiB/2006msec) 00:32:34.736 slat (nsec): min=1607, max=124973, avg=1920.80, stdev=1273.04 00:32:34.736 clat (usec): min=3827, max=13158, avg=8164.77, stdev=669.11 00:32:34.736 lat (usec): min=3834, max=13159, avg=8166.69, stdev=669.03 00:32:34.736 clat percentiles (usec): 00:32:34.736 | 1.00th=[ 6718], 5.00th=[ 7177], 10.00th=[ 7373], 20.00th=[ 7635], 00:32:34.736 | 30.00th=[ 7832], 40.00th=[ 8029], 50.00th=[ 8160], 60.00th=[ 8291], 00:32:34.736 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 8979], 95.00th=[ 9241], 00:32:34.736 | 99.00th=[ 9896], 99.50th=[10159], 99.90th=[11207], 99.95th=[12911], 00:32:34.736 | 99.99th=[13173] 00:32:34.736 bw ( KiB/s): min=33464, max=35544, per=99.91%, avg=34720.00, stdev=901.18, samples=4 00:32:34.736 iops : min= 8366, max= 8886, avg=8680.00, stdev=225.29, samples=4 00:32:34.736 write: IOPS=8680, BW=33.9MiB/s (35.6MB/s)(68.0MiB/2006msec); 0 zone resets 00:32:34.736 slat (nsec): min=1675, max=92948, avg=2014.48, stdev=785.96 00:32:34.736 clat (usec): min=1934, max=11208, avg=6499.76, stdev=588.39 00:32:34.736 lat (usec): min=1945, max=11210, avg=6501.77, stdev=588.36 
00:32:34.736 clat percentiles (usec): 00:32:34.736 | 1.00th=[ 5145], 5.00th=[ 5604], 10.00th=[ 5800], 20.00th=[ 6063], 00:32:34.736 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6521], 60.00th=[ 6652], 00:32:34.736 | 70.00th=[ 6783], 80.00th=[ 6915], 90.00th=[ 7177], 95.00th=[ 7373], 00:32:34.736 | 99.00th=[ 7963], 99.50th=[ 8356], 99.90th=[ 9896], 99.95th=[10945], 00:32:34.736 | 99.99th=[11207] 00:32:34.736 bw ( KiB/s): min=34392, max=34992, per=99.96%, avg=34710.00, stdev=256.11, samples=4 00:32:34.736 iops : min= 8598, max= 8748, avg=8677.50, stdev=64.03, samples=4 00:32:34.736 lat (msec) : 2=0.01%, 4=0.09%, 10=99.53%, 20=0.38% 00:32:34.736 cpu : usr=85.94%, sys=13.72%, ctx=4, majf=0, minf=1524 00:32:34.736 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:34.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:34.736 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:34.736 issued rwts: total=17428,17414,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:34.736 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:34.736 00:32:34.736 Run status group 0 (all jobs): 00:32:34.736 READ: bw=33.9MiB/s (35.6MB/s), 33.9MiB/s-33.9MiB/s (35.6MB/s-35.6MB/s), io=68.1MiB (71.4MB), run=2006-2006msec 00:32:34.736 WRITE: bw=33.9MiB/s (35.6MB/s), 33.9MiB/s-33.9MiB/s (35.6MB/s-35.6MB/s), io=68.0MiB (71.3MB), run=2006-2006msec 00:32:34.736 ----------------------------------------------------- 00:32:34.736 Suppressions used: 00:32:34.736 count bytes template 00:32:34.736 1 58 /usr/src/fio/parse.c 00:32:34.736 1 8 libtcmalloc_minimal.so 00:32:34.736 ----------------------------------------------------- 00:32:34.736 00:32:34.736 20:49:53 -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:34.736 20:49:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:34.736 20:49:53 -- common/autotest_common.sh@10 -- # set +x 00:32:34.736 20:49:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:34.736 20:49:53 -- host/fio.sh@72 -- # sync 00:32:34.736 20:49:53 -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:34.736 20:49:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:34.736 20:49:53 -- common/autotest_common.sh@10 -- # set +x 00:32:42.857 20:50:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:42.857 20:50:01 -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:32:42.857 20:50:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:42.857 20:50:01 -- common/autotest_common.sh@10 -- # set +x 00:32:42.857 20:50:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:42.857 20:50:01 -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:32:42.857 20:50:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:42.857 20:50:01 -- common/autotest_common.sh@10 -- # set +x 00:32:49.493 20:50:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:49.493 20:50:06 -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:32:49.493 20:50:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:49.493 20:50:06 -- common/autotest_common.sh@10 -- # set +x 00:32:49.493 20:50:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:49.493 20:50:06 -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:32:49.493 20:50:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:49.493 20:50:06 -- common/autotest_common.sh@10 -- # set +x 00:32:51.400 20:50:09 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:51.400 20:50:09 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:32:51.400 20:50:09 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:32:51.400 20:50:09 -- host/fio.sh@84 -- # nvmftestfini 00:32:51.400 20:50:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:51.400 20:50:09 -- nvmf/common.sh@116 -- # sync 00:32:51.400 20:50:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:51.400 20:50:09 -- nvmf/common.sh@119 -- # set +e 00:32:51.400 20:50:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:51.400 20:50:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:51.400 rmmod nvme_tcp 00:32:51.400 rmmod nvme_fabrics 00:32:51.400 rmmod nvme_keyring 00:32:51.400 20:50:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:51.400 20:50:09 -- nvmf/common.sh@123 -- # set -e 00:32:51.400 20:50:09 -- nvmf/common.sh@124 -- # return 0 00:32:51.400 20:50:09 -- nvmf/common.sh@477 -- # '[' -n 3734736 ']' 00:32:51.400 20:50:09 -- nvmf/common.sh@478 -- # killprocess 3734736 00:32:51.400 20:50:09 -- common/autotest_common.sh@926 -- # '[' -z 3734736 ']' 00:32:51.400 20:50:09 -- common/autotest_common.sh@930 -- # kill -0 3734736 00:32:51.400 20:50:09 -- common/autotest_common.sh@931 -- # uname 00:32:51.400 20:50:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:51.400 20:50:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3734736 00:32:51.400 20:50:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:51.400 20:50:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:51.660 20:50:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3734736' 00:32:51.660 killing process with pid 3734736 00:32:51.660 20:50:09 -- common/autotest_common.sh@945 -- # kill 3734736 00:32:51.660 20:50:09 -- common/autotest_common.sh@950 -- # wait 3734736 00:32:52.231 20:50:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:52.231 20:50:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:52.231 20:50:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:52.231 20:50:10 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:52.231 20:50:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:52.231 20:50:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:52.231 20:50:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:52.231 20:50:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:54.136 20:50:12 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:54.136 00:32:54.136 real 0m49.295s 00:32:54.136 user 3m53.987s 00:32:54.136 sys 0m8.745s 00:32:54.136 20:50:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:54.136 20:50:12 -- common/autotest_common.sh@10 -- # set +x 00:32:54.136 ************************************ 00:32:54.136 END TEST nvmf_fio_host 00:32:54.136 ************************************ 00:32:54.136 20:50:12 -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:54.136 20:50:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:54.136 20:50:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:54.136 20:50:12 -- common/autotest_common.sh@10 -- # set +x 00:32:54.136 ************************************ 00:32:54.136 START TEST nvmf_failover 00:32:54.136 ************************************ 00:32:54.136 20:50:12 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:54.136 * Looking for test storage... 00:32:54.136 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:32:54.136 20:50:12 -- host/failover.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:32:54.136 20:50:12 -- nvmf/common.sh@7 -- # uname -s 00:32:54.136 20:50:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:54.136 20:50:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:54.136 20:50:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:54.136 20:50:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:54.136 20:50:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:54.137 20:50:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:54.137 20:50:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:54.137 20:50:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:54.137 20:50:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:54.137 20:50:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:54.137 20:50:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:32:54.137 20:50:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:32:54.137 20:50:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:54.137 20:50:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:54.137 20:50:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:32:54.137 20:50:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:32:54.137 20:50:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:54.137 20:50:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:54.137 20:50:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:54.137 20:50:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go toolchain entries repeated several more times; duplicates collapsed ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.137 20:50:12 -- paths/export.sh@3 -- # PATH=[... same value with the toolchain prefixes prepended again; collapsed ...] 00:32:54.137 20:50:12 -- paths/export.sh@4 -- # PATH=[... same value with the toolchain prefixes prepended again; collapsed ...] 00:32:54.137 20:50:12 -- paths/export.sh@5 -- # export PATH 00:32:54.137 20:50:12 -- paths/export.sh@6 -- # echo [... the resulting PATH, identical to the value above; collapsed ...] 00:32:54.137 20:50:12 -- nvmf/common.sh@46 -- # : 0 00:32:54.137 20:50:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:54.137 20:50:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:54.137 20:50:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:54.137 20:50:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:54.137 20:50:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:54.137 20:50:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:54.137 20:50:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:54.137 20:50:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:54.137 20:50:12 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:54.137 20:50:12 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:54.137 20:50:12 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:32:54.137 20:50:12 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:54.137 20:50:12 -- host/failover.sh@18 -- # nvmftestinit 00:32:54.137 20:50:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:54.137 20:50:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:54.137 20:50:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:54.137 20:50:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:54.137 20:50:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:54.137 20:50:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:54.137 20:50:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:54.137 20:50:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:54.137 20:50:12 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:32:54.137 20:50:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:54.137 20:50:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:54.137 20:50:12 -- common/autotest_common.sh@10 -- # set +x 00:32:59.416 20:50:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:59.416 20:50:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:59.416 20:50:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:59.416 20:50:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:59.417 20:50:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:59.417 20:50:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:59.417 20:50:17 --
nvmf/common.sh@292 -- # local -A pci_drivers 00:32:59.417 20:50:17 -- nvmf/common.sh@294 -- # net_devs=() 00:32:59.417 20:50:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:59.417 20:50:17 -- nvmf/common.sh@295 -- # e810=() 00:32:59.417 20:50:17 -- nvmf/common.sh@295 -- # local -ga e810 00:32:59.417 20:50:17 -- nvmf/common.sh@296 -- # x722=() 00:32:59.417 20:50:17 -- nvmf/common.sh@296 -- # local -ga x722 00:32:59.417 20:50:17 -- nvmf/common.sh@297 -- # mlx=() 00:32:59.417 20:50:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:59.417 20:50:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:59.417 20:50:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:59.417 20:50:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:59.417 20:50:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:59.417 20:50:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:59.417 20:50:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:59.417 20:50:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:59.417 20:50:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:59.417 20:50:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:59.417 20:50:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:59.417 20:50:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:59.417 20:50:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:59.417 20:50:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:59.417 20:50:17 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:32:59.417 20:50:17 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:32:59.417 20:50:17 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:32:59.417 20:50:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:59.417 20:50:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:59.417 20:50:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:32:59.417 Found 0000:27:00.0 (0x8086 - 0x159b) 00:32:59.417 20:50:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:59.417 20:50:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:59.417 20:50:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:59.417 20:50:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:59.417 20:50:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:59.417 20:50:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:59.417 20:50:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:32:59.417 Found 0000:27:00.1 (0x8086 - 0x159b) 00:32:59.417 20:50:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:59.417 20:50:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:59.417 20:50:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:59.417 20:50:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:59.417 20:50:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:59.417 20:50:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:59.417 20:50:17 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:32:59.417 20:50:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:59.417 20:50:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:59.417 20:50:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:59.417 20:50:17 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:59.417 20:50:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:32:59.417 Found net devices under 0000:27:00.0: cvl_0_0 00:32:59.417 20:50:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:59.417 20:50:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:59.417 20:50:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:59.417 20:50:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:59.417 20:50:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:59.417 20:50:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:32:59.417 Found net devices under 0000:27:00.1: cvl_0_1 00:32:59.417 20:50:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:59.417 20:50:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:59.417 20:50:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:59.417 20:50:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:59.417 20:50:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:59.417 20:50:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:59.417 20:50:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:59.417 20:50:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:59.417 20:50:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:59.417 20:50:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:59.417 20:50:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:59.417 20:50:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:59.417 20:50:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:59.417 20:50:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:59.417 20:50:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:59.417 20:50:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:59.417 20:50:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:59.417 20:50:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:59.417 20:50:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:59.417 20:50:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:59.417 20:50:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:59.417 20:50:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:59.417 20:50:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:59.675 20:50:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:59.675 20:50:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:59.675 20:50:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:59.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:59.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:32:59.675 00:32:59.675 --- 10.0.0.2 ping statistics --- 00:32:59.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:59.675 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:32:59.675 20:50:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:59.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:59.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:32:59.675 00:32:59.675 --- 10.0.0.1 ping statistics --- 00:32:59.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:59.675 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:32:59.675 20:50:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:59.675 20:50:17 -- nvmf/common.sh@410 -- # return 0 00:32:59.675 20:50:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:59.675 20:50:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:59.675 20:50:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:59.675 20:50:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:59.675 20:50:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:59.675 20:50:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:59.675 20:50:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:59.675 20:50:17 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:59.675 20:50:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:59.675 20:50:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:59.675 20:50:17 -- common/autotest_common.sh@10 -- # set +x 00:32:59.675 20:50:17 -- nvmf/common.sh@469 -- # nvmfpid=3747364 00:32:59.675 20:50:17 -- nvmf/common.sh@470 -- # waitforlisten 3747364 00:32:59.675 20:50:17 -- common/autotest_common.sh@819 -- # '[' -z 3747364 ']' 00:32:59.675 20:50:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:59.675 20:50:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:59.675 20:50:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:59.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:59.675 20:50:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:59.675 20:50:17 -- common/autotest_common.sh@10 -- # set +x 00:32:59.675 20:50:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:59.675 [2024-04-26 20:50:17.901659] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:32:59.675 [2024-04-26 20:50:17.901765] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:59.675 EAL: No free 2048 kB hugepages reported on node 1 00:32:59.932 [2024-04-26 20:50:18.020789] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:59.932 [2024-04-26 20:50:18.117023] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:59.932 [2024-04-26 20:50:18.117195] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:59.932 [2024-04-26 20:50:18.117207] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:59.932 [2024-04-26 20:50:18.117216] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:59.932 [2024-04-26 20:50:18.117357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:59.932 [2024-04-26 20:50:18.117468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:59.932 [2024-04-26 20:50:18.117478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:00.498 20:50:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:00.498 20:50:18 -- common/autotest_common.sh@852 -- # return 0 00:33:00.498 20:50:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:00.498 20:50:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:00.498 20:50:18 -- common/autotest_common.sh@10 -- # set +x 00:33:00.498 20:50:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:00.498 20:50:18 -- host/failover.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:00.498 [2024-04-26 20:50:18.741311] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:00.498 20:50:18 -- host/failover.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:00.757 Malloc0 00:33:00.757 20:50:18 -- host/failover.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:01.017 20:50:19 -- host/failover.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:01.017 20:50:19 -- host/failover.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:01.275 [2024-04-26 20:50:19.394302] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:01.275 20:50:19 -- host/failover.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:01.275 [2024-04-26 20:50:19.542362] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:01.275 20:50:19 -- host/failover.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:01.532 [2024-04-26 20:50:19.686527] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:01.532 20:50:19 -- host/failover.sh@31 -- # bdevperf_pid=3747705 00:33:01.532 20:50:19 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:01.532 20:50:19 -- host/failover.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:01.532 20:50:19 -- host/failover.sh@34 -- # waitforlisten 3747705 /var/tmp/bdevperf.sock 00:33:01.532 20:50:19 -- common/autotest_common.sh@819 -- # '[' -z 3747705 ']' 00:33:01.532 20:50:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:01.532 20:50:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:01.532 20:50:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:33:01.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 20:50:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:01.532 20:50:19 -- common/autotest_common.sh@10 -- # set +x 00:33:02.465 20:50:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:02.465 20:50:20 -- common/autotest_common.sh@852 -- # return 0 00:33:02.465 20:50:20 -- host/failover.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:02.465 NVMe0n1 00:33:02.465 20:50:20 -- host/failover.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:02.726 00:33:02.726 20:50:20 -- host/failover.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:02.726 20:50:20 -- host/failover.sh@39 -- # run_test_pid=3748003 00:33:02.726 20:50:20 -- host/failover.sh@41 -- # sleep 1 00:33:03.665 20:50:21 -- host/failover.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:03.927 [2024-04-26 20:50:22.047219] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:33:03.927 [... the same tcp.c:1574 recv-state message repeats roughly fifty more times for tqpair=0x618000002880 while the qpair is torn down; duplicates collapsed ...] 00:33:03.927 20:50:22 -- host/failover.sh@45 -- # sleep 3
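The block above is the first failover leg: bdevperf attached the same controller name, NVMe0, through portals 10.0.0.2:4420 and 10.0.0.2:4421, the verify workload was started over the bdevperf RPC socket, and failover.sh then removed the 4420 listener with I/O in flight; the repeated tcp.c recv-state errors are the target tearing down the qpairs on that portal. One way to confirm from the initiator side which path survived would be to list the attached controllers over the same RPC socket (a sketch, not part of the recorded run):

  # After the 4420 listener is gone, NVMe0 should show only the 4421 path.
  /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0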
00:33:07.218 20:50:25 -- host/failover.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:07.218 00:33:07.218 20:50:25 -- host/failover.sh@48 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:07.218 [2024-04-26 20:50:25.515735] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:33:07.218 [... the same recv-state message repeats roughly a dozen more times for tqpair=0x618000003080; duplicates collapsed ...] 00:33:07.219 20:50:25 -- host/failover.sh@50 -- # sleep 3 00:33:10.506 20:50:28 -- host/failover.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:10.506 [2024-04-26 20:50:28.681110] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:10.506 20:50:28 -- host/failover.sh@55 -- # sleep 1
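failover.sh is stepping through the portal list so the initiator is always left with exactly one live path to fail over to: a third path to 4422 was just attached, the 4421 listener was dropped, 4420 was re-added, and the next step (below) drops 4422. Condensed from the rpc.py calls recorded in this run (the same commands as in the log, shortened with shell variables for readability):

  rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420   # fail path 1 under I/O
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421   # fail path 2
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420      # restore path 1
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422   # fail path 3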
00:33:11.439 20:50:29 -- host/failover.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:11.700 [2024-04-26 20:50:29.822910] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:11.700 [... the same recv-state message repeats several dozen more times for tqpair=0x618000003c80; duplicates collapsed ...] 00:33:11.700 20:50:29 -- host/failover.sh@59 -- # wait 3748003 00:33:18.281 0 00:33:18.281 20:50:36 -- host/failover.sh@61 -- # killprocess 3747705 00:33:18.281 20:50:36 -- common/autotest_common.sh@926 -- # '[' -z 3747705 ']' 00:33:18.281 20:50:36 -- common/autotest_common.sh@930 -- # kill -0 3747705 00:33:18.281 20:50:36 -- common/autotest_common.sh@931 -- # uname 00:33:18.281 20:50:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:18.281 20:50:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3747705 00:33:18.281 20:50:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:18.281 20:50:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:18.281 20:50:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3747705' 00:33:18.281 killing process with pid 3747705 00:33:18.281 20:50:36 -- common/autotest_common.sh@945 -- # kill 3747705 00:33:18.281 20:50:36 -- common/autotest_common.sh@950 -- # wait 3747705 00:33:18.281 20:50:36 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:18.281 [2024-04-26 20:50:19.774296] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:33:18.281 [2024-04-26 20:50:19.774423] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3747705 ] 00:33:18.281 EAL: No free 2048 kB hugepages reported on node 1 00:33:18.281 [2024-04-26 20:50:19.887171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.281 [2024-04-26 20:50:19.977404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.281 Running I/O for 15 seconds...
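Everything below is bdevperf's own log, replayed by the cat of try.txt above. Each nvme_qpair.c *NOTICE* pair is one in-flight command printed at abort time, and the completion status "(00/08)" decodes as status code type 0x0 (generic command status), status code 0x08, Command Aborted due to SQ Deletion, which is what the initiator should report when a listener vanishes under load. To gauge how much I/O each failover caught, one could count those completions in the saved file (a sketch, not part of the test):

  # Count commands that completed as aborted in the bdevperf log.
  grep -c 'ABORTED - SQ DELETION' \
      /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt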
00:33:18.281 [2024-04-26 20:50:22.048110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.281 [2024-04-26 20:50:22.048162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.281 [... the same print_command / print_completion pair repeats for every other outstanding READ and WRITE on qid:1 (lbas 18696 through 19608 in this capture), each completed ABORTED - SQ DELETION (00/08); duplicates collapsed for the remainder of this capture ...]
20:50:22.049251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.283 [2024-04-26 20:50:22.049269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.283 [2024-04-26 20:50:22.049286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.283 [2024-04-26 20:50:22.049304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.283 [2024-04-26 20:50:22.049321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.283 [2024-04-26 20:50:22.049338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.283 [2024-04-26 20:50:22.049355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.283 [2024-04-26 20:50:22.049371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.283 [2024-04-26 20:50:22.049392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.283 [2024-04-26 20:50:22.049408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.283 [2024-04-26 20:50:22.049425] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.283 [2024-04-26 20:50:22.049442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.283 [2024-04-26 20:50:22.049459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.283 [2024-04-26 20:50:22.049477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.283 [2024-04-26 20:50:22.049498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.283 [2024-04-26 20:50:22.049516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.283 [2024-04-26 20:50:22.049533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.283 [2024-04-26 20:50:22.049550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.283 [2024-04-26 20:50:22.049567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.283 [2024-04-26 20:50:22.049584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.283 [2024-04-26 20:50:22.049601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.283 [2024-04-26 20:50:22.049618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.283 [2024-04-26 20:50:22.049635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.283 [2024-04-26 20:50:22.049653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.283 [2024-04-26 20:50:22.049670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.283 [2024-04-26 20:50:22.049687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.283 [2024-04-26 20:50:22.049711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.283 [2024-04-26 20:50:22.049728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.283 [2024-04-26 20:50:22.049746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.283 [2024-04-26 20:50:22.049762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.283 [2024-04-26 20:50:22.049779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.283 [2024-04-26 20:50:22.049796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.283 [2024-04-26 20:50:22.049813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.283 [2024-04-26 20:50:22.049830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.283 [2024-04-26 20:50:22.049846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.283 [2024-04-26 20:50:22.049864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.283 [2024-04-26 20:50:22.049882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.283 [2024-04-26 20:50:22.049899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.283 [2024-04-26 20:50:22.049909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.284 [2024-04-26 20:50:22.049916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.284 [2024-04-26 20:50:22.049927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.284 [2024-04-26 20:50:22.049935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.284 [2024-04-26 20:50:22.049945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.284 [2024-04-26 20:50:22.049952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.284 
[2024-04-26 20:50:22.049962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.284 [2024-04-26 20:50:22.049970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.284 [2024-04-26 20:50:22.049980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.284 [2024-04-26 20:50:22.049987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.284 [2024-04-26 20:50:22.049997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.284 [2024-04-26 20:50:22.050004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.284 [2024-04-26 20:50:22.050014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.284 [2024-04-26 20:50:22.050021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.284 [2024-04-26 20:50:22.050031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.284 [2024-04-26 20:50:22.050039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.284 [2024-04-26 20:50:22.050049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.284 [2024-04-26 20:50:22.050057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.284 [2024-04-26 20:50:22.050067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.284 [2024-04-26 20:50:22.050075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.284 [2024-04-26 20:50:22.050086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.284 [2024-04-26 20:50:22.050094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.284 [2024-04-26 20:50:22.050104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.284 [2024-04-26 20:50:22.050112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.284 [2024-04-26 20:50:22.050122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.284 [2024-04-26 20:50:22.050130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.284 [2024-04-26 20:50:22.050139] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.284 [2024-04-26 20:50:22.050148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.284 [2024-04-26 20:50:22.050158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.284 [2024-04-26 20:50:22.050166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.284 [2024-04-26 20:50:22.050175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.284 [2024-04-26 20:50:22.050183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.284 [2024-04-26 20:50:22.050198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.284 [2024-04-26 20:50:22.050206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.284 [2024-04-26 20:50:22.050216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.284 [2024-04-26 20:50:22.050224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.284 [2024-04-26 20:50:22.050233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.284 [2024-04-26 20:50:22.050241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.284 [2024-04-26 20:50:22.050251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.284 [2024-04-26 20:50:22.050259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.284 [2024-04-26 20:50:22.050268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.284 [2024-04-26 20:50:22.050276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.284 [2024-04-26 20:50:22.050285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.284 [2024-04-26 20:50:22.050293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.284 [2024-04-26 20:50:22.050302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.284 [2024-04-26 20:50:22.050310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.284 [2024-04-26 20:50:22.050320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:38 nsid:1 lba:19296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.284 [2024-04-26 20:50:22.050327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.284 [2024-04-26 20:50:22.050337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.284 [2024-04-26 20:50:22.050344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.284 [2024-04-26 20:50:22.050354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.284 [2024-04-26 20:50:22.050361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.284 [2024-04-26 20:50:22.050371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.284 [2024-04-26 20:50:22.050382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.284 [2024-04-26 20:50:22.050392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.284 [2024-04-26 20:50:22.050400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.284 [2024-04-26 20:50:22.050411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.284 [2024-04-26 20:50:22.050418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.284 [2024-04-26 20:50:22.050428] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6130000042c0 is same with the state(5) to be set 00:33:18.284 [2024-04-26 20:50:22.050440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:18.284 [2024-04-26 20:50:22.050448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:18.284 [2024-04-26 20:50:22.050458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19456 len:8 PRP1 0x0 PRP2 0x0 00:33:18.284 [2024-04-26 20:50:22.050468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.284 [2024-04-26 20:50:22.050591] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6130000042c0 was disconnected and freed. reset controller. 
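The run above is the driver draining a doomed queue pair: while the TCP qpair backing I/O queue 1 is torn down for a controller reset, every request still queued on it is printed (nvme_io_qpair_print_command) and completed manually with the generic status ABORTED - SQ DELETION. The "(00/08)" repeated throughout is that status pair: sct 00 (generic command status) / sc 08 (command aborted due to SQ deletion). A minimal sketch of how an SPDK application could recognize this status in its completion callback and flag the I/O for resubmission after the reset; the io_ctx struct and needs_retry field are illustrative, while the spdk/nvme.h symbols are real SPDK definitions:

#include <stdbool.h>

#include "spdk/nvme.h"

/* Illustrative per-I/O context; only the retry flag matters here. */
struct io_ctx {
	bool needs_retry;
};

/* Completion callback matching spdk_nvme_cmd_cb. Commands completed
 * manually during qpair teardown carry the generic status pair
 * sct 00 / sc 08 -- exactly the "(00/08)" printed in the log above.
 * Such a command never executed on the target, so it is safe to
 * resubmit once the controller reset finishes. */
static void
io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
	struct io_ctx *io = arg;

	io->needs_retry = spdk_nvme_cpl_is_error(cpl) &&
			  cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
			  cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
}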
00:33:18.284 [2024-04-26 20:50:22.050616] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:33:18.284 [2024-04-26 20:50:22.050650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:33:18.284 [2024-04-26 20:50:22.050663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:18.284 [2024-04-26 20:50:22.050674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:33:18.284 [2024-04-26 20:50:22.050682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:18.284 [2024-04-26 20:50:22.050691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:33:18.284 [2024-04-26 20:50:22.050699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:18.284 [2024-04-26 20:50:22.050708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:33:18.284 [2024-04-26 20:50:22.050716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:18.284 [2024-04-26 20:50:22.050724] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:18.284 [2024-04-26 20:50:22.050777] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (9): Bad file descriptor
00:33:18.284 [2024-04-26 20:50:22.052442] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:18.284 [2024-04-26 20:50:22.071062] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
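Here bdev_nvme completes the failover: the admin queue's outstanding ASYNC EVENT REQUESTs are aborted the same way as the I/O above, the controller is marked failed, its transport ID is switched from 10.0.0.2:4420 to 10.0.0.2:4421, and the reset reconnects the admin and I/O qpairs against the new listener. A hedged sketch of the same step against the bare driver API, assuming a ctrlr that was attached to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420; note that spdk_nvme_ctrlr_set_trid() only accepts a controller already in the failed state, which matches the nvme_ctrlr_fail line just above:

#include "spdk/nvme.h"

/* Sketch of a failover from 10.0.0.2:4420 to 10.0.0.2:4421, assuming
 * `ctrlr` has already been marked failed (as nvme_ctrlr_fail reports
 * above); spdk_nvme_ctrlr_set_trid() rejects a healthy controller. */
static int
failover_to_second_listener(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_transport_id trid = {};
	int rc;

	rc = spdk_nvme_transport_id_parse(&trid,
			"trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4421 "
			"subnqn:nqn.2016-06.io.spdk:cnode1");
	if (rc != 0) {
		return rc;
	}

	/* Point the controller at the alternate listener... */
	rc = spdk_nvme_ctrlr_set_trid(ctrlr, &trid);
	if (rc != 0) {
		return rc;
	}

	/* ...and reset so all qpairs reconnect there. */
	return spdk_nvme_ctrlr_reset(ctrlr);
}

In the test itself the second path would have been registered up front (the bdev_nvme_failover_trid message suggests bdev_nvme keeps a list of alternate trids per controller), so the switch happens automatically when the first path dies.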
00:33:18.284 [2024-04-26 20:50:25.516033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:90560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:18.284 [2024-04-26 20:50:25.516085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... remaining queued READ/WRITE commands on sqid:1 elided: the same teardown pattern repeats on the next reset cycle about three seconds later, each command printed by nvme_io_qpair_print_command and completed manually with ABORTED - SQ DELETION (00/08); nsid:1, lba range 90040-91128, len:8, spanning 00:33:18.284-00:33:18.288 / 2024-04-26 20:50:25.516111-20:50:25.517713 ...]
00:33:18.288
[2024-04-26 20:50:25.517722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:91136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.288 [2024-04-26 20:50:25.517729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.288 [2024-04-26 20:50:25.517739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.288 [2024-04-26 20:50:25.517747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.288 [2024-04-26 20:50:25.517756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:91152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.288 [2024-04-26 20:50:25.517764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.288 [2024-04-26 20:50:25.517773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:91160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.288 [2024-04-26 20:50:25.517780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.288 [2024-04-26 20:50:25.517790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:91168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.288 [2024-04-26 20:50:25.517798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.288 [2024-04-26 20:50:25.517807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:91176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.288 [2024-04-26 20:50:25.517814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.288 [2024-04-26 20:50:25.517824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:91184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.288 [2024-04-26 20:50:25.517832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.288 [2024-04-26 20:50:25.517841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:91192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.288 [2024-04-26 20:50:25.517849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.288 [2024-04-26 20:50:25.517858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:91200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.288 [2024-04-26 20:50:25.517866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.288 [2024-04-26 20:50:25.517875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.288 [2024-04-26 20:50:25.517883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.288 [2024-04-26 20:50:25.517892] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:91216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.288 [2024-04-26 20:50:25.517900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.288 [2024-04-26 20:50:25.517909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:91224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.288 [2024-04-26 20:50:25.517919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.288 [2024-04-26 20:50:25.517929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:91232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.288 [2024-04-26 20:50:25.517936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.288 [2024-04-26 20:50:25.517946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:91240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.288 [2024-04-26 20:50:25.517954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.288 [2024-04-26 20:50:25.517963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:91248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.288 [2024-04-26 20:50:25.517971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.288 [2024-04-26 20:50:25.517980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:91256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.288 [2024-04-26 20:50:25.517988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.288 [2024-04-26 20:50:25.517997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:91264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.288 [2024-04-26 20:50:25.518004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.288 [2024-04-26 20:50:25.518014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.288 [2024-04-26 20:50:25.518022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.288 [2024-04-26 20:50:25.518032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:91280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.288 [2024-04-26 20:50:25.518039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.288 [2024-04-26 20:50:25.518049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:90536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.288 [2024-04-26 20:50:25.518057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.288 [2024-04-26 20:50:25.518067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:90544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.288 [2024-04-26 20:50:25.518075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.288 [2024-04-26 20:50:25.518085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:90552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.288 [2024-04-26 20:50:25.518093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.288 [2024-04-26 20:50:25.518103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:90568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.288 [2024-04-26 20:50:25.518110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.288 [2024-04-26 20:50:25.518120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:90576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.289 [2024-04-26 20:50:25.518127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.289 [2024-04-26 20:50:25.518139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:90608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.289 [2024-04-26 20:50:25.518147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.289 [2024-04-26 20:50:25.518157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:90616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.289 [2024-04-26 20:50:25.518164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.289 [2024-04-26 20:50:25.518174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:90632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.289 [2024-04-26 20:50:25.518182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.289 [2024-04-26 20:50:25.518191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:91288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.289 [2024-04-26 20:50:25.518199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.289 [2024-04-26 20:50:25.518208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:91296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.289 [2024-04-26 20:50:25.518216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.289 [2024-04-26 20:50:25.518226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:90640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.289 [2024-04-26 20:50:25.518233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.289 [2024-04-26 20:50:25.518243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:90648 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.289 [2024-04-26 20:50:25.518251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.289 [2024-04-26 20:50:25.518261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:90656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.289 [2024-04-26 20:50:25.518268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.289 [2024-04-26 20:50:25.518277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:90664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.289 [2024-04-26 20:50:25.518285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.289 [2024-04-26 20:50:25.518294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:90672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.289 [2024-04-26 20:50:25.518302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.289 [2024-04-26 20:50:25.518311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:90688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.289 [2024-04-26 20:50:25.518319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.289 [2024-04-26 20:50:25.518329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:90696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.289 [2024-04-26 20:50:25.518336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.289 [2024-04-26 20:50:25.518345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000004640 is same with the state(5) to be set 00:33:18.289 [2024-04-26 20:50:25.518358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:18.289 [2024-04-26 20:50:25.518367] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:18.289 [2024-04-26 20:50:25.518376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90704 len:8 PRP1 0x0 PRP2 0x0 00:33:18.289 [2024-04-26 20:50:25.518387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.289 [2024-04-26 20:50:25.518504] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x613000004640 was disconnected and freed. reset controller. 
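Every queued command in the burst above is completed with the same status pair, printed by spdk_nvme_print_completion as "(00/08)", i.e. (SCT/SC). A minimal decoding sketch, assuming only the NVMe base spec's completion-status layout (SPDK's real pretty-printer is spdk_nvme_cpl_get_status_string; decode_status below is a hypothetical stand-in):

```c
/*
 * Minimal sketch (not SPDK's code): decode the "(00/08)" pair that
 * spdk_nvme_print_completion logs as (SCT/SC). SCT 0x0 is the generic
 * command status set; SC 0x08 in that set is "Command Aborted due to
 * SQ Deletion", which is what a host sees for I/O still queued when
 * its submission queue is deleted during a reset.
 */
#include <stdint.h>
#include <stdio.h>

static const char *decode_status(uint8_t sct, uint8_t sc)
{
    if (sct == 0x0 && sc == 0x00) return "SUCCESS";
    if (sct == 0x0 && sc == 0x08) return "ABORTED - SQ DELETION";
    return "OTHER";
}

int main(void)
{
    /* Upper halfword of CQE DW3 (NVMe base spec): bit 0 phase tag,
     * bits 1-8 SC, bits 9-11 SCT, bit 14 M, bit 15 DNR. */
    uint16_t status = (0x0u << 9) | (0x08u << 1); /* sct=0, sc=0x08 */
    uint8_t sc  = (status >> 1) & 0xff;
    uint8_t sct = (status >> 9) & 0x7;
    printf("%s (%02x/%02x)\n", decode_status(sct, sc), sct, sc);
    return 0;
}
```

Run against the value above, this prints "ABORTED - SQ DELETION (00/08)", matching the NOTICE lines in the log.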
00:33:18.289 [2024-04-26 20:50:25.518521] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:33:18.289 [2024-04-26 20:50:25.518550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:33:18.289 [2024-04-26 20:50:25.518563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:18.289 [2024-04-26 20:50:25.518575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:33:18.289 [2024-04-26 20:50:25.518584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:18.289 [2024-04-26 20:50:25.518592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:33:18.289 [2024-04-26 20:50:25.518600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:18.289 [2024-04-26 20:50:25.518609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:33:18.289 [2024-04-26 20:50:25.518616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:18.289 [2024-04-26 20:50:25.518625] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:18.289 [2024-04-26 20:50:25.520360] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:18.289 [2024-04-26 20:50:25.520395] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (9): Bad file descriptor
00:33:18.289 [2024-04-26 20:50:25.594336] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
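This stretch shows the recovery path end to end: bdev_nvme_failover_trid switches the target trid from 10.0.0.2:4421 to 10.0.0.2:4422, the outstanding admin ASYNC EVENT REQUESTs are aborted, the controller is marked failed and disconnected, the TCP flush fails with errno 9 (EBADF) because the old socket is already gone, and the reset completes. A hedged host-side sketch of that detect/reset/re-arm order; spdk_nvme_qpair_process_completions, spdk_nvme_ctrlr_reset, and spdk_nvme_ctrlr_alloc_io_qpair are real public SPDK calls, but the wrapper and its recovery policy are illustrative, not bdev_nvme's internal failover logic:

```c
/*
 * Illustrative sketch only: mirror the visible order in the log
 * (dead qpair detected -> controller reset -> I/O path re-armed).
 */
#include "spdk/nvme.h"

static bool poll_and_recover(struct spdk_nvme_ctrlr *ctrlr,
                             struct spdk_nvme_qpair **qpair)
{
    /* The "(9): Bad file descriptor" ERROR above is this call failing:
     * the TCP socket under the qpair is gone, so nothing can be reaped.
     * max_completions == 0 means "process everything available". */
    int rc = spdk_nvme_qpair_process_completions(*qpair, 0);
    if (rc >= 0) {
        return true; /* qpair healthy */
    }

    /* Reset the controller; commands still queued are manually
     * completed with ABORTED - SQ DELETION, as in the NOTICE pairs. */
    if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
        return false; /* the "in failed state" path */
    }

    /* Re-arm the I/O path. Depending on SPDK version, existing qpairs
     * may instead be reconnected; allocating a fresh one with default
     * opts is the simplest illustration. */
    *qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
    return *qpair != NULL;
}
```

In the test above this loop is driven by bdev_nvme itself, which also walks the list of alternate trids on each failover, which is why a second abort burst follows once I/O resumes on the new path.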
00:33:18.289 [2024-04-26 20:50:29.823477 - 20:50:29.825671] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated pairs of queued READ/WRITE commands (sqid:1, nsid:1, lba 86288-87520, len:8) each completed with ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:18.292 [2024-04-26 20:50:29.825681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:86888 len:8 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.292 [2024-04-26 20:50:29.825688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.292 [2024-04-26 20:50:29.825698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:86896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.292 [2024-04-26 20:50:29.825705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.292 [2024-04-26 20:50:29.825715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:86912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.292 [2024-04-26 20:50:29.825722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.292 [2024-04-26 20:50:29.825732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:86920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.292 [2024-04-26 20:50:29.825739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.292 [2024-04-26 20:50:29.825749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:86928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.292 [2024-04-26 20:50:29.825757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.292 [2024-04-26 20:50:29.825766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:86936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.292 [2024-04-26 20:50:29.825774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.292 [2024-04-26 20:50:29.825784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000004d40 is same with the state(5) to be set 00:33:18.292 [2024-04-26 20:50:29.825795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:18.292 [2024-04-26 20:50:29.825803] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:18.292 [2024-04-26 20:50:29.825812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86952 len:8 PRP1 0x0 PRP2 0x0 00:33:18.293 [2024-04-26 20:50:29.825821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.293 [2024-04-26 20:50:29.825940] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x613000004d40 was disconnected and freed. reset controller. 
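Each READ/WRITE pair in the dump above is a queued command being completed manually with ABORTED - SQ DELETION (status 00/08) while the host tears down the TCP qpair; bdev_nvme resubmits that I/O once a path reconnects, so the verify workload keeps running across the resets. One way the harness provokes exactly this teardown, visible later in this trace, is to detach the active path host-side. A minimal sketch, using only commands that appear in this log:

rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py   # rpc_cmd in the trace wraps this
# Drop the path the controller is currently using; queued I/O on its qpair
# is completed with ABORTED - SQ DELETION and bdev_nvme fails over to
# another path registered under the same -b name.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
sleep 3   # as in failover.sh, give the reconnect/reset time to finish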
00:33:18.293 [2024-04-26 20:50:29.825963] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:33:18.293 [2024-04-26 20:50:29.825994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:18.293 [2024-04-26 20:50:29.826007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.293 [2024-04-26 20:50:29.826017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:18.293 [2024-04-26 20:50:29.826027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.293 [2024-04-26 20:50:29.826036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:18.293 [2024-04-26 20:50:29.826043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.293 [2024-04-26 20:50:29.826052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:18.293 [2024-04-26 20:50:29.826059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.293 [2024-04-26 20:50:29.826068] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:18.293 [2024-04-26 20:50:29.827845] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:18.293 [2024-04-26 20:50:29.827877] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (9): Bad file descriptor 00:33:18.293 [2024-04-26 20:50:29.973290] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
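The "Resetting controller successful" notice above closes out one of the three failovers this stage is expected to produce (see the count check just below). The path set bdev_nvme cycles through was built by attaching the same subsystem several times under one controller name; a minimal sketch of that setup, with the socket path, names and ports taken from this log:

rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
# Target side: the subsystem listens on all three ports (4420 was added during setup).
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
# Host side: the first attach creates controller NVMe0 and bdev NVMe0n1 over port 4420 ...
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# ... and repeating it with the same -b name but a different port registers
# 4421/4422 as alternative (failover) paths rather than new bdevs.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1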
00:33:18.293 00:33:18.293 Latency(us) 00:33:18.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:18.293 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:18.293 Verification LBA range: start 0x0 length 0x4000 00:33:18.293 NVMe0n1 : 15.01 17259.39 67.42 1244.70 0.00 6905.28 521.70 14072.99 00:33:18.293 =================================================================================================================== 00:33:18.293 Total : 17259.39 67.42 1244.70 0.00 6905.28 521.70 14072.99 00:33:18.293 Received shutdown signal, test time was about 15.000000 seconds 00:33:18.293 00:33:18.293 Latency(us) 00:33:18.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:18.293 =================================================================================================================== 00:33:18.293 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:18.293 20:50:36 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:33:18.293 20:50:36 -- host/failover.sh@65 -- # count=3 00:33:18.293 20:50:36 -- host/failover.sh@67 -- # (( count != 3 )) 00:33:18.293 20:50:36 -- host/failover.sh@73 -- # bdevperf_pid=3751016 00:33:18.293 20:50:36 -- host/failover.sh@75 -- # waitforlisten 3751016 /var/tmp/bdevperf.sock 00:33:18.293 20:50:36 -- common/autotest_common.sh@819 -- # '[' -z 3751016 ']' 00:33:18.293 20:50:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:18.293 20:50:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:18.293 20:50:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:18.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
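Spelled out, the two steps traced above amount to a pass/fail count over the captured output followed by relaunching bdevperf in idle, RPC-driven mode. A sketch, assuming the output file try.txt this run uses (paths from the log; waitforlisten is the helper from autotest_common.sh):

# Three failovers were exercised, so exactly three successful resets are expected.
count=$(grep -c 'Resetting controller successful' /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt)
(( count == 3 )) || exit 1

# Stage two: bdevperf starts idle (-z) and is driven entirely over its own
# RPC socket, so controllers can be attached and detached while it runs.
/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!
waitforlisten $bdevperf_pid /var/tmp/bdevperf.sock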
00:33:18.293 20:50:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:18.293 20:50:36 -- common/autotest_common.sh@10 -- # set +x 00:33:18.293 20:50:36 -- host/failover.sh@72 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:33:19.233 20:50:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:19.233 20:50:37 -- common/autotest_common.sh@852 -- # return 0 00:33:19.233 20:50:37 -- host/failover.sh@76 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:19.233 [2024-04-26 20:50:37.358953] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:19.233 20:50:37 -- host/failover.sh@77 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:19.233 [2024-04-26 20:50:37.506962] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:19.233 20:50:37 -- host/failover.sh@78 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:19.800 NVMe0n1 00:33:19.800 20:50:37 -- host/failover.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:19.800 00:33:19.800 20:50:38 -- host/failover.sh@80 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:20.366 00:33:20.366 20:50:38 -- host/failover.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:20.366 20:50:38 -- host/failover.sh@82 -- # grep -q NVMe0 00:33:20.366 20:50:38 -- host/failover.sh@84 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:20.626 20:50:38 -- host/failover.sh@87 -- # sleep 3 00:33:23.922 20:50:41 -- host/failover.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:23.922 20:50:41 -- host/failover.sh@88 -- # grep -q NVMe0 00:33:23.922 20:50:41 -- host/failover.sh@90 -- # run_test_pid=3752088 00:33:23.922 20:50:41 -- host/failover.sh@92 -- # wait 3752088 00:33:23.922 20:50:41 -- host/failover.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:24.863 0 00:33:24.863 20:50:42 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:24.863 [2024-04-26 20:50:36.496967] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:33:24.863 [2024-04-26 20:50:36.497086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3751016 ] 00:33:24.863 EAL: No free 2048 kB hugepages reported on node 1 00:33:24.863 [2024-04-26 20:50:36.610824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.863 [2024-04-26 20:50:36.705998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:24.863 [2024-04-26 20:50:38.713757] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:24.863 [2024-04-26 20:50:38.713819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:24.863 [2024-04-26 20:50:38.713833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.863 [2024-04-26 20:50:38.713846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:24.863 [2024-04-26 20:50:38.713855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.863 [2024-04-26 20:50:38.713864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:24.863 [2024-04-26 20:50:38.713871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.863 [2024-04-26 20:50:38.713880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:24.863 [2024-04-26 20:50:38.713888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.863 [2024-04-26 20:50:38.713896] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.863 [2024-04-26 20:50:38.713943] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.863 [2024-04-26 20:50:38.713966] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (9): Bad file descriptor 00:33:24.863 [2024-04-26 20:50:38.808715] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:24.863 Running I/O for 1 seconds... 
00:33:24.863 00:33:24.863 Latency(us) 00:33:24.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.863 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:24.863 Verification LBA range: start 0x0 length 0x4000 00:33:24.863 NVMe0n1 : 1.00 17797.77 69.52 0.00 0.00 7166.34 875.25 8243.74 00:33:24.863 =================================================================================================================== 00:33:24.863 Total : 17797.77 69.52 0.00 0.00 7166.34 875.25 8243.74 00:33:24.863 20:50:42 -- host/failover.sh@95 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:24.863 20:50:42 -- host/failover.sh@95 -- # grep -q NVMe0 00:33:24.863 20:50:43 -- host/failover.sh@98 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:25.121 20:50:43 -- host/failover.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:25.121 20:50:43 -- host/failover.sh@99 -- # grep -q NVMe0 00:33:25.414 20:50:43 -- host/failover.sh@100 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:25.414 20:50:43 -- host/failover.sh@101 -- # sleep 3 00:33:28.741 20:50:46 -- host/failover.sh@103 -- # grep -q NVMe0 00:33:28.741 20:50:46 -- host/failover.sh@103 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:28.741 20:50:46 -- host/failover.sh@108 -- # killprocess 3751016 00:33:28.741 20:50:46 -- common/autotest_common.sh@926 -- # '[' -z 3751016 ']' 00:33:28.741 20:50:46 -- common/autotest_common.sh@930 -- # kill -0 3751016 00:33:28.741 20:50:46 -- common/autotest_common.sh@931 -- # uname 00:33:28.741 20:50:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:28.741 20:50:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3751016 00:33:28.741 20:50:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:28.741 20:50:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:28.741 20:50:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3751016' 00:33:28.741 killing process with pid 3751016 00:33:28.741 20:50:46 -- common/autotest_common.sh@945 -- # kill 3751016 00:33:28.741 20:50:46 -- common/autotest_common.sh@950 -- # wait 3751016 00:33:29.000 20:50:47 -- host/failover.sh@110 -- # sync 00:33:29.000 20:50:47 -- host/failover.sh@111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:29.260 20:50:47 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:33:29.260 20:50:47 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:29.260 20:50:47 -- host/failover.sh@116 -- # nvmftestfini 00:33:29.260 20:50:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:29.260 20:50:47 -- nvmf/common.sh@116 -- # sync 00:33:29.260 20:50:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:29.260 20:50:47 -- nvmf/common.sh@119 -- # set +e 00:33:29.260 20:50:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:29.260 20:50:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:29.260 
rmmod nvme_tcp 00:33:29.260 rmmod nvme_fabrics 00:33:29.260 rmmod nvme_keyring 00:33:29.260 20:50:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:29.260 20:50:47 -- nvmf/common.sh@123 -- # set -e 00:33:29.260 20:50:47 -- nvmf/common.sh@124 -- # return 0 00:33:29.260 20:50:47 -- nvmf/common.sh@477 -- # '[' -n 3747364 ']' 00:33:29.260 20:50:47 -- nvmf/common.sh@478 -- # killprocess 3747364 00:33:29.260 20:50:47 -- common/autotest_common.sh@926 -- # '[' -z 3747364 ']' 00:33:29.260 20:50:47 -- common/autotest_common.sh@930 -- # kill -0 3747364 00:33:29.260 20:50:47 -- common/autotest_common.sh@931 -- # uname 00:33:29.260 20:50:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:29.260 20:50:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3747364 00:33:29.260 20:50:47 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:33:29.260 20:50:47 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:33:29.260 20:50:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3747364' 00:33:29.260 killing process with pid 3747364 00:33:29.260 20:50:47 -- common/autotest_common.sh@945 -- # kill 3747364 00:33:29.260 20:50:47 -- common/autotest_common.sh@950 -- # wait 3747364 00:33:29.827 20:50:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:29.827 20:50:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:29.827 20:50:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:29.827 20:50:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:29.827 20:50:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:29.827 20:50:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:29.827 20:50:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:29.827 20:50:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:31.738 20:50:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:31.738 00:33:31.738 real 0m37.649s 00:33:31.738 user 2m0.199s 00:33:31.738 sys 0m6.881s 00:33:31.738 20:50:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:31.738 20:50:50 -- common/autotest_common.sh@10 -- # set +x 00:33:31.738 ************************************ 00:33:31.738 END TEST nvmf_failover 00:33:31.738 ************************************ 00:33:31.738 20:50:50 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:31.738 20:50:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:31.738 20:50:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:31.738 20:50:50 -- common/autotest_common.sh@10 -- # set +x 00:33:31.738 ************************************ 00:33:31.738 START TEST nvmf_discovery 00:33:31.738 ************************************ 00:33:31.738 20:50:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:31.997 * Looking for test storage... 
00:33:31.997 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:33:31.997 20:50:50 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:33:31.997 20:50:50 -- nvmf/common.sh@7 -- # uname -s 00:33:31.997 20:50:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:31.997 20:50:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:31.997 20:50:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:31.997 20:50:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:31.997 20:50:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:31.997 20:50:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:31.997 20:50:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:31.997 20:50:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:31.997 20:50:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:31.997 20:50:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:31.997 20:50:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:33:31.997 20:50:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:33:31.997 20:50:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:31.997 20:50:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:31.997 20:50:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:33:31.997 20:50:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:33:31.997 20:50:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:31.997 20:50:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:31.997 20:50:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:31.997 20:50:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain prefixes repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.997 20:50:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... as above ...] 00:33:31.997 20:50:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... as above ...] 00:33:31.997 20:50:50 -- paths/export.sh@5 -- # export PATH 00:33:31.997 20:50:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... as above; duplicated PATH values condensed ...] 00:33:31.997 20:50:50 -- nvmf/common.sh@46 -- # : 0 00:33:31.997 20:50:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:31.997 20:50:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:31.997 20:50:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:31.997 20:50:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:31.997 20:50:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:31.997 20:50:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:31.997 20:50:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:31.997 20:50:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:31.997 20:50:50 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:33:31.997 20:50:50 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:33:31.997 20:50:50 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:33:31.997 20:50:50 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:33:31.997 20:50:50 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:33:31.997 20:50:50 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:33:31.997 20:50:50 -- host/discovery.sh@25 -- # nvmftestinit 00:33:31.997 20:50:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:31.997 20:50:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:31.997 20:50:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:31.997 20:50:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:31.997 20:50:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:31.997 20:50:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:31.997 20:50:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:31.997 20:50:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:31.997 20:50:50 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:33:31.997 20:50:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:31.997 20:50:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:31.997 20:50:50 -- common/autotest_common.sh@10 -- # set +x 00:33:37.276 20:50:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:37.276 20:50:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:37.276 20:50:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:37.276 20:50:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:37.276 20:50:55 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:37.276 20:50:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:37.276 20:50:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:37.276 20:50:55 -- nvmf/common.sh@294 -- # net_devs=() 00:33:37.276 20:50:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:37.276 20:50:55 -- nvmf/common.sh@295 -- # e810=() 00:33:37.276 20:50:55 -- nvmf/common.sh@295 -- # local -ga e810 00:33:37.276 20:50:55 -- nvmf/common.sh@296 -- # x722=() 00:33:37.276 20:50:55 -- nvmf/common.sh@296 -- # local -ga x722 00:33:37.276 20:50:55 -- nvmf/common.sh@297 -- # mlx=() 00:33:37.276 20:50:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:37.276 20:50:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:37.276 20:50:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:37.276 20:50:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:37.276 20:50:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:37.276 20:50:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:37.276 20:50:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:37.276 20:50:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:37.276 20:50:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:37.276 20:50:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:37.276 20:50:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:37.276 20:50:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:37.276 20:50:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:37.276 20:50:55 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:33:37.276 20:50:55 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:33:37.276 20:50:55 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:33:37.276 20:50:55 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:33:37.276 20:50:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:37.276 20:50:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:37.276 20:50:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:33:37.276 Found 0000:27:00.0 (0x8086 - 0x159b) 00:33:37.276 20:50:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:37.276 20:50:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:37.276 20:50:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:37.276 20:50:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:37.276 20:50:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:37.276 20:50:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:37.276 20:50:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:33:37.276 Found 0000:27:00.1 (0x8086 - 0x159b) 00:33:37.276 20:50:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:37.276 20:50:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:37.276 20:50:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:37.276 20:50:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:37.276 20:50:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:37.276 20:50:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:37.276 20:50:55 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:33:37.276 20:50:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:37.276 20:50:55 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:37.276 20:50:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:37.276 20:50:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:37.276 20:50:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:33:37.276 Found net devices under 0000:27:00.0: cvl_0_0 00:33:37.276 20:50:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:37.276 20:50:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:37.277 20:50:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:37.277 20:50:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:37.277 20:50:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:37.277 20:50:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:33:37.277 Found net devices under 0000:27:00.1: cvl_0_1 00:33:37.277 20:50:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:37.277 20:50:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:37.277 20:50:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:37.277 20:50:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:37.277 20:50:55 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:37.277 20:50:55 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:37.277 20:50:55 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:37.277 20:50:55 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:37.277 20:50:55 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:37.277 20:50:55 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:37.277 20:50:55 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:37.277 20:50:55 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:37.277 20:50:55 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:37.277 20:50:55 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:37.277 20:50:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:37.277 20:50:55 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:37.277 20:50:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:37.277 20:50:55 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:37.277 20:50:55 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:37.277 20:50:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:37.277 20:50:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:37.277 20:50:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:37.277 20:50:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:37.277 20:50:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:37.277 20:50:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:37.277 20:50:55 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:37.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:37.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:33:37.277 00:33:37.277 --- 10.0.0.2 ping statistics --- 00:33:37.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:37.277 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:33:37.277 20:50:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:37.277 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:37.277 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.435 ms 00:33:37.277 00:33:37.277 --- 10.0.0.1 ping statistics --- 00:33:37.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:37.277 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:33:37.277 20:50:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:37.277 20:50:55 -- nvmf/common.sh@410 -- # return 0 00:33:37.277 20:50:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:37.277 20:50:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:37.277 20:50:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:37.277 20:50:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:37.277 20:50:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:37.277 20:50:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:37.277 20:50:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:37.277 20:50:55 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:33:37.277 20:50:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:37.277 20:50:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:37.277 20:50:55 -- common/autotest_common.sh@10 -- # set +x 00:33:37.277 20:50:55 -- nvmf/common.sh@469 -- # nvmfpid=3757059 00:33:37.277 20:50:55 -- nvmf/common.sh@470 -- # waitforlisten 3757059 00:33:37.277 20:50:55 -- common/autotest_common.sh@819 -- # '[' -z 3757059 ']' 00:33:37.277 20:50:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:37.277 20:50:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:37.277 20:50:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:37.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:37.277 20:50:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:37.277 20:50:55 -- common/autotest_common.sh@10 -- # set +x 00:33:37.277 20:50:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:37.277 [2024-04-26 20:50:55.407051] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:33:37.277 [2024-04-26 20:50:55.407164] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:37.277 EAL: No free 2048 kB hugepages reported on node 1 00:33:37.277 [2024-04-26 20:50:55.529527] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.538 [2024-04-26 20:50:55.626711] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:37.538 [2024-04-26 20:50:55.626889] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:37.538 [2024-04-26 20:50:55.626903] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:37.538 [2024-04-26 20:50:55.626913] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
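The discovery test starting here drives two SPDK applications: the nvmf target just launched inside the cvl_0_0_ns_spdk namespace, and a second nvmf_tgt on /tmp/host.sock that runs the host-side discovery service against it. The RPC sequence that follows in this trace boils down to the sketch below (commands taken from the log; $rpc and $host_rpc stand for the rpc_cmd wrapper around scripts/rpc.py):

rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
host_rpc="$rpc -s /tmp/host.sock"

# Target: TCP transport plus a discovery listener on port 8009.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
# Two null bdevs to back the namespaces the test adds to cnode0 later.
$rpc bdev_null_create null0 1000 512
$rpc bdev_null_create null1 1000 512

# Host: start the discovery service; every subsystem the discovery
# controller reports gets attached automatically under the prefix "nvme".
$host_rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

# The repeated checks below then poll these two views after each
# target-side change (subsystem added, namespace added, listener added):
$host_rpc bdev_nvme_get_controllers | jq -r '.[].name'   # e.g. nvme0
$host_rpc bdev_get_bdevs | jq -r '.[].name'              # e.g. nvme0n1 nvme0n2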
00:33:37.538 [2024-04-26 20:50:55.626939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:37.797 20:50:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:37.797 20:50:56 -- common/autotest_common.sh@852 -- # return 0 00:33:37.797 20:50:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:37.797 20:50:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:37.797 20:50:56 -- common/autotest_common.sh@10 -- # set +x 00:33:37.797 20:50:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:37.797 20:50:56 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:37.797 20:50:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:37.797 20:50:56 -- common/autotest_common.sh@10 -- # set +x 00:33:37.797 [2024-04-26 20:50:56.128638] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:37.797 20:50:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:37.797 20:50:56 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:33:37.797 20:50:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:37.797 20:50:56 -- common/autotest_common.sh@10 -- # set +x 00:33:37.797 [2024-04-26 20:50:56.136816] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:38.056 20:50:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:38.056 20:50:56 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:33:38.056 20:50:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:38.056 20:50:56 -- common/autotest_common.sh@10 -- # set +x 00:33:38.056 null0 00:33:38.056 20:50:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:38.056 20:50:56 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:33:38.056 20:50:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:38.056 20:50:56 -- common/autotest_common.sh@10 -- # set +x 00:33:38.056 null1 00:33:38.056 20:50:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:38.056 20:50:56 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:33:38.056 20:50:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:38.056 20:50:56 -- common/autotest_common.sh@10 -- # set +x 00:33:38.056 20:50:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:38.056 20:50:56 -- host/discovery.sh@45 -- # hostpid=3757262 00:33:38.056 20:50:56 -- host/discovery.sh@46 -- # waitforlisten 3757262 /tmp/host.sock 00:33:38.056 20:50:56 -- common/autotest_common.sh@819 -- # '[' -z 3757262 ']' 00:33:38.056 20:50:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:33:38.056 20:50:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:38.056 20:50:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:38.056 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:38.056 20:50:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:38.056 20:50:56 -- common/autotest_common.sh@10 -- # set +x 00:33:38.056 20:50:56 -- host/discovery.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:33:38.056 [2024-04-26 20:50:56.238474] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:33:38.056 [2024-04-26 20:50:56.238582] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3757262 ] 00:33:38.056 EAL: No free 2048 kB hugepages reported on node 1 00:33:38.056 [2024-04-26 20:50:56.349994] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:38.315 [2024-04-26 20:50:56.439670] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:38.315 [2024-04-26 20:50:56.439848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:38.884 20:50:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:38.884 20:50:56 -- common/autotest_common.sh@852 -- # return 0 00:33:38.884 20:50:56 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:38.884 20:50:56 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:33:38.884 20:50:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:38.884 20:50:56 -- common/autotest_common.sh@10 -- # set +x 00:33:38.884 20:50:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:38.884 20:50:56 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:33:38.884 20:50:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:38.884 20:50:56 -- common/autotest_common.sh@10 -- # set +x 00:33:38.884 20:50:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:38.884 20:50:56 -- host/discovery.sh@72 -- # notify_id=0 00:33:38.884 20:50:56 -- host/discovery.sh@78 -- # get_subsystem_names 00:33:38.884 20:50:56 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:38.884 20:50:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:38.884 20:50:56 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:38.884 20:50:56 -- common/autotest_common.sh@10 -- # set +x 00:33:38.884 20:50:56 -- host/discovery.sh@59 -- # sort 00:33:38.884 20:50:56 -- host/discovery.sh@59 -- # xargs 00:33:38.884 20:50:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:38.884 20:50:56 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:33:38.884 20:50:56 -- host/discovery.sh@79 -- # get_bdev_list 00:33:38.884 20:50:56 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:38.884 20:50:56 -- host/discovery.sh@55 -- # xargs 00:33:38.884 20:50:56 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:38.884 20:50:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:38.884 20:50:56 -- host/discovery.sh@55 -- # sort 00:33:38.884 20:50:56 -- common/autotest_common.sh@10 -- # set +x 00:33:38.884 20:50:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:38.884 20:50:57 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:33:38.884 20:50:57 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:33:38.884 20:50:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:38.884 20:50:57 -- common/autotest_common.sh@10 -- # set +x 00:33:38.884 20:50:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:38.884 20:50:57 -- host/discovery.sh@82 -- # get_subsystem_names 00:33:38.884 20:50:57 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:38.884 20:50:57 -- host/discovery.sh@59 -- # jq -r 
'.[].name' 00:33:38.884 20:50:57 -- host/discovery.sh@59 -- # sort 00:33:38.884 20:50:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:38.884 20:50:57 -- host/discovery.sh@59 -- # xargs 00:33:38.884 20:50:57 -- common/autotest_common.sh@10 -- # set +x 00:33:38.884 20:50:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:38.884 20:50:57 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:33:38.884 20:50:57 -- host/discovery.sh@83 -- # get_bdev_list 00:33:38.884 20:50:57 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:38.884 20:50:57 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:38.884 20:50:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:38.884 20:50:57 -- host/discovery.sh@55 -- # sort 00:33:38.884 20:50:57 -- common/autotest_common.sh@10 -- # set +x 00:33:38.884 20:50:57 -- host/discovery.sh@55 -- # xargs 00:33:38.884 20:50:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:38.884 20:50:57 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:33:38.884 20:50:57 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:33:38.884 20:50:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:38.884 20:50:57 -- common/autotest_common.sh@10 -- # set +x 00:33:38.884 20:50:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:38.884 20:50:57 -- host/discovery.sh@86 -- # get_subsystem_names 00:33:38.884 20:50:57 -- host/discovery.sh@59 -- # xargs 00:33:38.884 20:50:57 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:38.884 20:50:57 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:38.884 20:50:57 -- host/discovery.sh@59 -- # sort 00:33:38.884 20:50:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:38.884 20:50:57 -- common/autotest_common.sh@10 -- # set +x 00:33:38.884 20:50:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:38.884 20:50:57 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:33:38.884 20:50:57 -- host/discovery.sh@87 -- # get_bdev_list 00:33:38.884 20:50:57 -- host/discovery.sh@55 -- # sort 00:33:38.884 20:50:57 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:38.884 20:50:57 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:38.884 20:50:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:38.884 20:50:57 -- host/discovery.sh@55 -- # xargs 00:33:38.885 20:50:57 -- common/autotest_common.sh@10 -- # set +x 00:33:38.885 20:50:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:38.885 20:50:57 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:33:38.885 20:50:57 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:38.885 20:50:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:38.885 20:50:57 -- common/autotest_common.sh@10 -- # set +x 00:33:38.885 [2024-04-26 20:50:57.201062] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:38.885 20:50:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:38.885 20:50:57 -- host/discovery.sh@92 -- # get_subsystem_names 00:33:38.885 20:50:57 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:38.885 20:50:57 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:38.885 20:50:57 -- host/discovery.sh@59 -- # xargs 00:33:38.885 20:50:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:38.885 20:50:57 -- host/discovery.sh@59 -- # sort 00:33:38.885 20:50:57 -- 
common/autotest_common.sh@10 -- # set +x 00:33:38.885 20:50:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:39.146 20:50:57 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:33:39.146 20:50:57 -- host/discovery.sh@93 -- # get_bdev_list 00:33:39.146 20:50:57 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:39.146 20:50:57 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:39.146 20:50:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:39.146 20:50:57 -- common/autotest_common.sh@10 -- # set +x 00:33:39.146 20:50:57 -- host/discovery.sh@55 -- # sort 00:33:39.146 20:50:57 -- host/discovery.sh@55 -- # xargs 00:33:39.146 20:50:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:39.146 20:50:57 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:33:39.146 20:50:57 -- host/discovery.sh@94 -- # get_notification_count 00:33:39.146 20:50:57 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:39.146 20:50:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:39.146 20:50:57 -- common/autotest_common.sh@10 -- # set +x 00:33:39.146 20:50:57 -- host/discovery.sh@74 -- # jq '. | length' 00:33:39.146 20:50:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:39.146 20:50:57 -- host/discovery.sh@74 -- # notification_count=0 00:33:39.146 20:50:57 -- host/discovery.sh@75 -- # notify_id=0 00:33:39.146 20:50:57 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:33:39.146 20:50:57 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:33:39.146 20:50:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:39.146 20:50:57 -- common/autotest_common.sh@10 -- # set +x 00:33:39.146 20:50:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:39.146 20:50:57 -- host/discovery.sh@100 -- # sleep 1 00:33:39.717 [2024-04-26 20:50:57.984883] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:39.717 [2024-04-26 20:50:57.984917] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:39.717 [2024-04-26 20:50:57.984946] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:39.977 [2024-04-26 20:50:58.072987] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:39.977 [2024-04-26 20:50:58.300681] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:39.977 [2024-04-26 20:50:58.300710] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:40.236 20:50:58 -- host/discovery.sh@101 -- # get_subsystem_names 00:33:40.236 20:50:58 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:40.236 20:50:58 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:40.236 20:50:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:40.236 20:50:58 -- host/discovery.sh@59 -- # sort 00:33:40.236 20:50:58 -- host/discovery.sh@59 -- # xargs 00:33:40.236 20:50:58 -- common/autotest_common.sh@10 -- # set +x 00:33:40.236 20:50:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:40.236 20:50:58 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:40.236 20:50:58 -- host/discovery.sh@102 -- # get_bdev_list 00:33:40.236 20:50:58 -- host/discovery.sh@55 
-- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:40.236 20:50:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:40.236 20:50:58 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:40.236 20:50:58 -- common/autotest_common.sh@10 -- # set +x 00:33:40.236 20:50:58 -- host/discovery.sh@55 -- # xargs 00:33:40.236 20:50:58 -- host/discovery.sh@55 -- # sort 00:33:40.236 20:50:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:40.236 20:50:58 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:33:40.236 20:50:58 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:33:40.236 20:50:58 -- host/discovery.sh@63 -- # xargs 00:33:40.236 20:50:58 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:40.236 20:50:58 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:40.236 20:50:58 -- host/discovery.sh@63 -- # sort -n 00:33:40.236 20:50:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:40.236 20:50:58 -- common/autotest_common.sh@10 -- # set +x 00:33:40.236 20:50:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:40.236 20:50:58 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:33:40.236 20:50:58 -- host/discovery.sh@104 -- # get_notification_count 00:33:40.236 20:50:58 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:40.236 20:50:58 -- host/discovery.sh@74 -- # jq '. | length' 00:33:40.236 20:50:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:40.236 20:50:58 -- common/autotest_common.sh@10 -- # set +x 00:33:40.236 20:50:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:40.236 20:50:58 -- host/discovery.sh@74 -- # notification_count=1 00:33:40.236 20:50:58 -- host/discovery.sh@75 -- # notify_id=1 00:33:40.236 20:50:58 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:33:40.236 20:50:58 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:33:40.236 20:50:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:40.236 20:50:58 -- common/autotest_common.sh@10 -- # set +x 00:33:40.236 20:50:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:40.236 20:50:58 -- host/discovery.sh@109 -- # sleep 1 00:33:41.172 20:50:59 -- host/discovery.sh@110 -- # get_bdev_list 00:33:41.172 20:50:59 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:41.172 20:50:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.172 20:50:59 -- host/discovery.sh@55 -- # xargs 00:33:41.172 20:50:59 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:41.172 20:50:59 -- common/autotest_common.sh@10 -- # set +x 00:33:41.172 20:50:59 -- host/discovery.sh@55 -- # sort 00:33:41.433 20:50:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.433 20:50:59 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:41.433 20:50:59 -- host/discovery.sh@111 -- # get_notification_count 00:33:41.433 20:50:59 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:33:41.433 20:50:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.433 20:50:59 -- common/autotest_common.sh@10 -- # set +x 00:33:41.433 20:50:59 -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:41.433 20:50:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.433 20:50:59 -- host/discovery.sh@74 -- # notification_count=1 00:33:41.433 20:50:59 -- host/discovery.sh@75 -- # notify_id=2 00:33:41.433 20:50:59 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:33:41.433 20:50:59 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:33:41.433 20:50:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.433 20:50:59 -- common/autotest_common.sh@10 -- # set +x 00:33:41.433 [2024-04-26 20:50:59.590255] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:41.433 [2024-04-26 20:50:59.591323] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:41.433 [2024-04-26 20:50:59.591359] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:41.433 20:50:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.433 20:50:59 -- host/discovery.sh@117 -- # sleep 1 00:33:41.433 [2024-04-26 20:50:59.720427] bdev_nvme.c:6677:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:33:41.694 [2024-04-26 20:50:59.825942] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:41.694 [2024-04-26 20:50:59.825968] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:41.694 [2024-04-26 20:50:59.825978] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:42.263 20:51:00 -- host/discovery.sh@118 -- # get_subsystem_names 00:33:42.264 20:51:00 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:42.264 20:51:00 -- host/discovery.sh@59 -- # xargs 00:33:42.264 20:51:00 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:42.264 20:51:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.264 20:51:00 -- host/discovery.sh@59 -- # sort 00:33:42.264 20:51:00 -- common/autotest_common.sh@10 -- # set +x 00:33:42.523 20:51:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.523 20:51:00 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:42.523 20:51:00 -- host/discovery.sh@119 -- # get_bdev_list 00:33:42.523 20:51:00 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:42.523 20:51:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.523 20:51:00 -- common/autotest_common.sh@10 -- # set +x 00:33:42.523 20:51:00 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:42.523 20:51:00 -- host/discovery.sh@55 -- # sort 00:33:42.523 20:51:00 -- host/discovery.sh@55 -- # xargs 00:33:42.523 20:51:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.523 20:51:00 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:42.523 20:51:00 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:33:42.523 20:51:00 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:42.523 20:51:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.523 20:51:00 -- common/autotest_common.sh@10 -- # set +x 00:33:42.523 20:51:00 -- host/discovery.sh@63 -- # xargs 00:33:42.523 20:51:00 -- host/discovery.sh@63 -- # jq -r 
'.[].ctrlrs[].trid.trsvcid' 00:33:42.523 20:51:00 -- host/discovery.sh@63 -- # sort -n 00:33:42.523 20:51:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.523 20:51:00 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:42.523 20:51:00 -- host/discovery.sh@121 -- # get_notification_count 00:33:42.523 20:51:00 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:42.523 20:51:00 -- host/discovery.sh@74 -- # jq '. | length' 00:33:42.523 20:51:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.523 20:51:00 -- common/autotest_common.sh@10 -- # set +x 00:33:42.523 20:51:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.523 20:51:00 -- host/discovery.sh@74 -- # notification_count=0 00:33:42.523 20:51:00 -- host/discovery.sh@75 -- # notify_id=2 00:33:42.523 20:51:00 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:33:42.523 20:51:00 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:42.523 20:51:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.523 20:51:00 -- common/autotest_common.sh@10 -- # set +x 00:33:42.523 [2024-04-26 20:51:00.759840] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:42.523 [2024-04-26 20:51:00.759880] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:42.523 20:51:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.523 20:51:00 -- host/discovery.sh@127 -- # sleep 1 00:33:42.523 [2024-04-26 20:51:00.768728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.523 [2024-04-26 20:51:00.768755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.523 [2024-04-26 20:51:00.768768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.523 [2024-04-26 20:51:00.768777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.523 [2024-04-26 20:51:00.768786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.523 [2024-04-26 20:51:00.768795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.523 [2024-04-26 20:51:00.768804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.523 [2024-04-26 20:51:00.768812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.523 [2024-04-26 20:51:00.768821] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:33:42.523 [2024-04-26 20:51:00.778713] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:33:42.523 [2024-04-26 20:51:00.788727] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:42.523 [2024-04-26 20:51:00.789136] posix.c:1032:posix_sock_create: *ERROR*: 
connect() failed, errno = 111 00:33:42.523 [2024-04-26 20:51:00.789529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.523 [2024-04-26 20:51:00.789542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:33:42.523 [2024-04-26 20:51:00.789553] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:33:42.523 [2024-04-26 20:51:00.789567] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:33:42.523 [2024-04-26 20:51:00.789585] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:42.523 [2024-04-26 20:51:00.789594] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:42.523 [2024-04-26 20:51:00.789605] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:42.523 [2024-04-26 20:51:00.789625] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:42.523 [2024-04-26 20:51:00.798773] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:42.523 [2024-04-26 20:51:00.799256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.523 [2024-04-26 20:51:00.799573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.524 [2024-04-26 20:51:00.799584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:33:42.524 [2024-04-26 20:51:00.799593] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:33:42.524 [2024-04-26 20:51:00.799605] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:33:42.524 [2024-04-26 20:51:00.799623] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:42.524 [2024-04-26 20:51:00.799631] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:42.524 [2024-04-26 20:51:00.799639] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:42.524 [2024-04-26 20:51:00.799656] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
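Each repeating block above and below is one reconnect cycle: bdev_nvme disconnects the controller, retries the TCP connect to 10.0.0.2:4420, and fails, because the 4420 listener was just removed by the nvmf_subsystem_remove_listener call at host/discovery.sh@126. The cycles land roughly 10 ms apart in the timestamps and stop once the discovery poller drops the dead path further down. errno 111 here is ECONNREFUSED; a hedged aside for decoding it, assuming a Linux box with kernel headers installed:

  # errno 111 in the posix_sock_create errors above is ECONNREFUSED:
  grep 'ECONNREFUSED' /usr/include/asm-generic/errno.h
  # -> #define ECONNREFUSED 111 /* Connection refused */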
00:33:42.524 [2024-04-26 20:51:00.808813] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:42.524 [2024-04-26 20:51:00.809304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.524 [2024-04-26 20:51:00.809742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.524 [2024-04-26 20:51:00.809754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:33:42.524 [2024-04-26 20:51:00.809764] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:33:42.524 [2024-04-26 20:51:00.809778] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:33:42.524 [2024-04-26 20:51:00.809796] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:42.524 [2024-04-26 20:51:00.809804] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:42.524 [2024-04-26 20:51:00.809813] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:42.524 [2024-04-26 20:51:00.809826] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:42.524 [2024-04-26 20:51:00.818852] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:42.524 [2024-04-26 20:51:00.819294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.524 [2024-04-26 20:51:00.819773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.524 [2024-04-26 20:51:00.819784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:33:42.524 [2024-04-26 20:51:00.819794] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:33:42.524 [2024-04-26 20:51:00.819806] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:33:42.524 [2024-04-26 20:51:00.819825] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:42.524 [2024-04-26 20:51:00.819834] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:42.524 [2024-04-26 20:51:00.819842] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:42.524 [2024-04-26 20:51:00.819855] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:42.524 [2024-04-26 20:51:00.828892] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:42.524 [2024-04-26 20:51:00.829144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.524 [2024-04-26 20:51:00.829389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.524 [2024-04-26 20:51:00.829400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:33:42.524 [2024-04-26 20:51:00.829410] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:33:42.524 [2024-04-26 20:51:00.829423] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:33:42.524 [2024-04-26 20:51:00.829436] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:42.524 [2024-04-26 20:51:00.829445] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:42.524 [2024-04-26 20:51:00.829454] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:42.524 [2024-04-26 20:51:00.829467] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:42.524 [2024-04-26 20:51:00.838928] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:42.524 [2024-04-26 20:51:00.839231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.524 [2024-04-26 20:51:00.839577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.524 [2024-04-26 20:51:00.839589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:33:42.524 [2024-04-26 20:51:00.839598] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:33:42.524 [2024-04-26 20:51:00.839612] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:33:42.524 [2024-04-26 20:51:00.839625] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:42.524 [2024-04-26 20:51:00.839633] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:42.524 [2024-04-26 20:51:00.839642] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:42.524 [2024-04-26 20:51:00.839655] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
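At this point the discovery poller re-reads the discovery log page, drops the dead 4420 path, and keeps 4421 (the "not found" / "found again" pair directly below); the host/discovery.sh@128-@130 assertions that follow expect controller nvme0 to survive with trsvcid 4421 only. Outside the harness the same check is a single RPC; a minimal sketch, assuming the host app is still serving /tmp/host.sock and SPDK's scripts/rpc.py is on hand:

  # list the transport service IDs still attached to controller nvme0
  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n
  # expected after the failover above: 4421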
00:33:42.524 [2024-04-26 20:51:00.847873] bdev_nvme.c:6540:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:42.524 [2024-04-26 20:51:00.847901] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:43.461 20:51:01 -- host/discovery.sh@128 -- # get_subsystem_names 00:33:43.461 20:51:01 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:43.461 20:51:01 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:43.461 20:51:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:43.461 20:51:01 -- host/discovery.sh@59 -- # sort 00:33:43.461 20:51:01 -- common/autotest_common.sh@10 -- # set +x 00:33:43.461 20:51:01 -- host/discovery.sh@59 -- # xargs 00:33:43.461 20:51:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:43.721 20:51:01 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:43.721 20:51:01 -- host/discovery.sh@129 -- # get_bdev_list 00:33:43.721 20:51:01 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:43.721 20:51:01 -- host/discovery.sh@55 -- # xargs 00:33:43.721 20:51:01 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:43.721 20:51:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:43.721 20:51:01 -- host/discovery.sh@55 -- # sort 00:33:43.721 20:51:01 -- common/autotest_common.sh@10 -- # set +x 00:33:43.721 20:51:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:43.721 20:51:01 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:43.721 20:51:01 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:33:43.721 20:51:01 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:43.721 20:51:01 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:43.721 20:51:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:43.721 20:51:01 -- host/discovery.sh@63 -- # sort -n 00:33:43.721 20:51:01 -- common/autotest_common.sh@10 -- # set +x 00:33:43.721 20:51:01 -- host/discovery.sh@63 -- # xargs 00:33:43.721 20:51:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:43.721 20:51:01 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:33:43.721 20:51:01 -- host/discovery.sh@131 -- # get_notification_count 00:33:43.721 20:51:01 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:43.721 20:51:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:43.721 20:51:01 -- common/autotest_common.sh@10 -- # set +x 00:33:43.721 20:51:01 -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:43.721 20:51:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:43.721 20:51:01 -- host/discovery.sh@74 -- # notification_count=0 00:33:43.721 20:51:01 -- host/discovery.sh@75 -- # notify_id=2 00:33:43.721 20:51:01 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:33:43.721 20:51:01 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:33:43.721 20:51:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:43.721 20:51:01 -- common/autotest_common.sh@10 -- # set +x 00:33:43.721 20:51:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:43.721 20:51:01 -- host/discovery.sh@135 -- # sleep 1 00:33:44.728 20:51:02 -- host/discovery.sh@136 -- # get_subsystem_names 00:33:44.728 20:51:02 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:44.728 20:51:02 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:44.728 20:51:02 -- host/discovery.sh@59 -- # xargs 00:33:44.728 20:51:02 -- host/discovery.sh@59 -- # sort 00:33:44.728 20:51:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.728 20:51:02 -- common/autotest_common.sh@10 -- # set +x 00:33:44.728 20:51:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:44.728 20:51:02 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:33:44.728 20:51:02 -- host/discovery.sh@137 -- # get_bdev_list 00:33:44.728 20:51:02 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:44.728 20:51:02 -- host/discovery.sh@55 -- # xargs 00:33:44.728 20:51:02 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:44.728 20:51:02 -- host/discovery.sh@55 -- # sort 00:33:44.728 20:51:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.728 20:51:02 -- common/autotest_common.sh@10 -- # set +x 00:33:44.728 20:51:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:44.728 20:51:03 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:33:44.728 20:51:03 -- host/discovery.sh@138 -- # get_notification_count 00:33:44.728 20:51:03 -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:44.728 20:51:03 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:44.728 20:51:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.728 20:51:03 -- common/autotest_common.sh@10 -- # set +x 00:33:44.728 20:51:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:44.728 20:51:03 -- host/discovery.sh@74 -- # notification_count=2 00:33:44.728 20:51:03 -- host/discovery.sh@75 -- # notify_id=4 00:33:44.728 20:51:03 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:33:44.728 20:51:03 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:44.728 20:51:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.728 20:51:03 -- common/autotest_common.sh@10 -- # set +x 00:33:46.104 [2024-04-26 20:51:04.105836] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:46.104 [2024-04-26 20:51:04.105864] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:46.104 [2024-04-26 20:51:04.105889] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:46.104 [2024-04-26 20:51:04.193936] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:33:46.364 [2024-04-26 20:51:04.468755] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:46.364 [2024-04-26 20:51:04.468799] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:46.364 20:51:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:46.364 20:51:04 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:46.364 20:51:04 -- common/autotest_common.sh@640 -- # local es=0 00:33:46.364 20:51:04 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:46.364 20:51:04 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:33:46.364 20:51:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:33:46.364 20:51:04 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:33:46.364 20:51:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:33:46.364 20:51:04 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:46.364 20:51:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:46.364 20:51:04 -- common/autotest_common.sh@10 -- # set +x 00:33:46.364 request: 00:33:46.364 { 00:33:46.364 "name": "nvme", 00:33:46.364 "trtype": "tcp", 00:33:46.364 "traddr": "10.0.0.2", 00:33:46.364 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:46.364 "adrfam": "ipv4", 00:33:46.364 "trsvcid": "8009", 00:33:46.364 "wait_for_attach": true, 00:33:46.364 "method": "bdev_nvme_start_discovery", 00:33:46.364 "req_id": 1 00:33:46.364 } 00:33:46.364 Got JSON-RPC error response 00:33:46.364 response: 00:33:46.364 { 00:33:46.364 "code": -17, 00:33:46.364 "message": "File exists" 00:33:46.364 } 00:33:46.364 20:51:04 -- common/autotest_common.sh@579 -- # 
[[ 1 == 0 ]] 00:33:46.364 20:51:04 -- common/autotest_common.sh@643 -- # es=1 00:33:46.364 20:51:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:33:46.364 20:51:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:33:46.364 20:51:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:33:46.364 20:51:04 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:33:46.364 20:51:04 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:46.364 20:51:04 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:46.364 20:51:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:46.364 20:51:04 -- host/discovery.sh@67 -- # sort 00:33:46.364 20:51:04 -- common/autotest_common.sh@10 -- # set +x 00:33:46.364 20:51:04 -- host/discovery.sh@67 -- # xargs 00:33:46.364 20:51:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:46.364 20:51:04 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:33:46.364 20:51:04 -- host/discovery.sh@147 -- # get_bdev_list 00:33:46.364 20:51:04 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:46.364 20:51:04 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:46.364 20:51:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:46.364 20:51:04 -- common/autotest_common.sh@10 -- # set +x 00:33:46.364 20:51:04 -- host/discovery.sh@55 -- # sort 00:33:46.364 20:51:04 -- host/discovery.sh@55 -- # xargs 00:33:46.364 20:51:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:46.364 20:51:04 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:46.364 20:51:04 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:46.364 20:51:04 -- common/autotest_common.sh@640 -- # local es=0 00:33:46.364 20:51:04 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:46.364 20:51:04 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:33:46.364 20:51:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:33:46.364 20:51:04 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:33:46.364 20:51:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:33:46.364 20:51:04 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:46.364 20:51:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:46.364 20:51:04 -- common/autotest_common.sh@10 -- # set +x 00:33:46.364 request: 00:33:46.364 { 00:33:46.364 "name": "nvme_second", 00:33:46.364 "trtype": "tcp", 00:33:46.364 "traddr": "10.0.0.2", 00:33:46.364 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:46.364 "adrfam": "ipv4", 00:33:46.364 "trsvcid": "8009", 00:33:46.364 "wait_for_attach": true, 00:33:46.364 "method": "bdev_nvme_start_discovery", 00:33:46.364 "req_id": 1 00:33:46.364 } 00:33:46.364 Got JSON-RPC error response 00:33:46.364 response: 00:33:46.364 { 00:33:46.364 "code": -17, 00:33:46.364 "message": "File exists" 00:33:46.364 } 00:33:46.364 20:51:04 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:33:46.364 20:51:04 -- common/autotest_common.sh@643 -- # es=1 00:33:46.364 20:51:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:33:46.364 20:51:04 -- common/autotest_common.sh@662 -- 
# [[ -n '' ]] 00:33:46.364 20:51:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:33:46.364 20:51:04 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:33:46.364 20:51:04 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:46.364 20:51:04 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:46.364 20:51:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:46.364 20:51:04 -- host/discovery.sh@67 -- # sort 00:33:46.364 20:51:04 -- common/autotest_common.sh@10 -- # set +x 00:33:46.364 20:51:04 -- host/discovery.sh@67 -- # xargs 00:33:46.364 20:51:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:46.364 20:51:04 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:33:46.364 20:51:04 -- host/discovery.sh@153 -- # get_bdev_list 00:33:46.364 20:51:04 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:46.364 20:51:04 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:46.364 20:51:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:46.364 20:51:04 -- common/autotest_common.sh@10 -- # set +x 00:33:46.364 20:51:04 -- host/discovery.sh@55 -- # sort 00:33:46.364 20:51:04 -- host/discovery.sh@55 -- # xargs 00:33:46.364 20:51:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:46.364 20:51:04 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:46.365 20:51:04 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:46.365 20:51:04 -- common/autotest_common.sh@640 -- # local es=0 00:33:46.365 20:51:04 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:46.365 20:51:04 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:33:46.365 20:51:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:33:46.365 20:51:04 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:33:46.365 20:51:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:33:46.365 20:51:04 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:46.365 20:51:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:46.365 20:51:04 -- common/autotest_common.sh@10 -- # set +x 00:33:47.743 [2024-04-26 20:51:05.681894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.743 [2024-04-26 20:51:05.682229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.743 [2024-04-26 20:51:05.682244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000006240 with addr=10.0.0.2, port=8010 00:33:47.743 [2024-04-26 20:51:05.682272] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:47.743 [2024-04-26 20:51:05.682282] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:47.743 [2024-04-26 20:51:05.682293] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:48.686 [2024-04-26 20:51:06.681784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.686 [2024-04-26 20:51:06.681959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
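Unlike the duplicate-name cases above, which fail fast with -17 "File exists", this nvme_second attempt targets port 8010, where nothing listens, and carries an explicit attach timeout (-T 3000). The connect failures continuing below arrive about one second apart until the 3000 ms budget runs out, the poller logs "timed out while attaching discovery ctrlr", and the RPC returns -110, Connection timed out. A hedged re-creation of the call, with the socket path assumed and the flags exactly as captured in the xtrace:

  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
      -q nqn.2021-12.io.spdk:test -T 3000
  # expected: JSON-RPC error -110 (Connection timed out) after ~3 s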
00:33:48.686 [2024-04-26 20:51:06.681970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000006400 with addr=10.0.0.2, port=8010 00:33:48.686 [2024-04-26 20:51:06.682000] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:48.686 [2024-04-26 20:51:06.682008] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:48.686 [2024-04-26 20:51:06.682017] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:49.621 [2024-04-26 20:51:07.681349] bdev_nvme.c:6796:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:33:49.621 request: 00:33:49.621 { 00:33:49.621 "name": "nvme_second", 00:33:49.621 "trtype": "tcp", 00:33:49.621 "traddr": "10.0.0.2", 00:33:49.621 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:49.621 "adrfam": "ipv4", 00:33:49.621 "trsvcid": "8010", 00:33:49.621 "attach_timeout_ms": 3000, 00:33:49.621 "method": "bdev_nvme_start_discovery", 00:33:49.621 "req_id": 1 00:33:49.621 } 00:33:49.621 Got JSON-RPC error response 00:33:49.621 response: 00:33:49.621 { 00:33:49.621 "code": -110, 00:33:49.621 "message": "Connection timed out" 00:33:49.621 } 00:33:49.622 20:51:07 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:33:49.622 20:51:07 -- common/autotest_common.sh@643 -- # es=1 00:33:49.622 20:51:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:33:49.622 20:51:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:33:49.622 20:51:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:33:49.622 20:51:07 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:33:49.622 20:51:07 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:49.622 20:51:07 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:49.622 20:51:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:49.622 20:51:07 -- host/discovery.sh@67 -- # sort 00:33:49.622 20:51:07 -- host/discovery.sh@67 -- # xargs 00:33:49.622 20:51:07 -- common/autotest_common.sh@10 -- # set +x 00:33:49.622 20:51:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:49.622 20:51:07 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:33:49.622 20:51:07 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:33:49.622 20:51:07 -- host/discovery.sh@162 -- # kill 3757262 00:33:49.622 20:51:07 -- host/discovery.sh@163 -- # nvmftestfini 00:33:49.622 20:51:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:49.622 20:51:07 -- nvmf/common.sh@116 -- # sync 00:33:49.622 20:51:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:49.622 20:51:07 -- nvmf/common.sh@119 -- # set +e 00:33:49.622 20:51:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:49.622 20:51:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:49.622 rmmod nvme_tcp 00:33:49.622 rmmod nvme_fabrics 00:33:49.622 rmmod nvme_keyring 00:33:49.622 20:51:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:49.622 20:51:07 -- nvmf/common.sh@123 -- # set -e 00:33:49.622 20:51:07 -- nvmf/common.sh@124 -- # return 0 00:33:49.622 20:51:07 -- nvmf/common.sh@477 -- # '[' -n 3757059 ']' 00:33:49.622 20:51:07 -- nvmf/common.sh@478 -- # killprocess 3757059 00:33:49.622 20:51:07 -- common/autotest_common.sh@926 -- # '[' -z 3757059 ']' 00:33:49.622 20:51:07 -- common/autotest_common.sh@930 -- # kill -0 3757059 00:33:49.622 20:51:07 -- common/autotest_common.sh@931 -- # uname 00:33:49.622 20:51:07 -- common/autotest_common.sh@931 -- # '[' 
Linux = Linux ']' 00:33:49.622 20:51:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3757059 00:33:49.622 20:51:07 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:33:49.622 20:51:07 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:33:49.622 20:51:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3757059' 00:33:49.622 killing process with pid 3757059 00:33:49.622 20:51:07 -- common/autotest_common.sh@945 -- # kill 3757059 00:33:49.622 20:51:07 -- common/autotest_common.sh@950 -- # wait 3757059 00:33:50.190 20:51:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:50.190 20:51:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:50.190 20:51:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:50.190 20:51:08 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:50.190 20:51:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:50.190 20:51:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:50.190 20:51:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:50.190 20:51:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:52.098 20:51:10 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:52.098 00:33:52.098 real 0m20.236s 00:33:52.098 user 0m27.193s 00:33:52.098 sys 0m5.107s 00:33:52.098 20:51:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:52.098 20:51:10 -- common/autotest_common.sh@10 -- # set +x 00:33:52.098 ************************************ 00:33:52.098 END TEST nvmf_discovery 00:33:52.098 ************************************ 00:33:52.098 20:51:10 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:52.098 20:51:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:52.098 20:51:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:52.098 20:51:10 -- common/autotest_common.sh@10 -- # set +x 00:33:52.098 ************************************ 00:33:52.098 START TEST nvmf_discovery_remove_ifc 00:33:52.098 ************************************ 00:33:52.098 20:51:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:52.098 * Looking for test storage... 
00:33:52.098 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:33:52.099 20:51:10 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:33:52.099 20:51:10 -- nvmf/common.sh@7 -- # uname -s 00:33:52.099 20:51:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:52.099 20:51:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:52.099 20:51:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:52.099 20:51:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:52.099 20:51:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:52.099 20:51:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:52.099 20:51:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:52.099 20:51:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:52.099 20:51:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:52.099 20:51:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:52.099 20:51:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:33:52.099 20:51:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:33:52.099 20:51:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:52.099 20:51:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:52.099 20:51:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:33:52.099 20:51:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:33:52.099 20:51:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:52.099 20:51:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:52.099 20:51:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:52.099 20:51:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.099 20:51:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.099 20:51:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.099 20:51:10 -- paths/export.sh@5 -- # export PATH 00:33:52.099 20:51:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.099 20:51:10 -- nvmf/common.sh@46 -- # : 0 00:33:52.099 20:51:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:52.099 20:51:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:52.099 20:51:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:52.099 20:51:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:52.099 20:51:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:52.099 20:51:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:52.099 20:51:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:52.099 20:51:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:52.099 20:51:10 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:52.099 20:51:10 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:52.099 20:51:10 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:52.099 20:51:10 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:52.099 20:51:10 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:52.099 20:51:10 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:52.099 20:51:10 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:52.099 20:51:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:52.099 20:51:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:52.099 20:51:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:52.099 20:51:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:52.099 20:51:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:52.099 20:51:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:52.099 20:51:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:52.099 20:51:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:52.099 20:51:10 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:33:52.099 20:51:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:52.099 20:51:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:52.099 20:51:10 -- common/autotest_common.sh@10 -- # set +x 00:33:57.378 20:51:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:57.378 20:51:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:57.378 20:51:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:57.378 
20:51:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:57.378 20:51:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:57.378 20:51:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:57.378 20:51:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:57.378 20:51:15 -- nvmf/common.sh@294 -- # net_devs=() 00:33:57.378 20:51:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:57.378 20:51:15 -- nvmf/common.sh@295 -- # e810=() 00:33:57.378 20:51:15 -- nvmf/common.sh@295 -- # local -ga e810 00:33:57.378 20:51:15 -- nvmf/common.sh@296 -- # x722=() 00:33:57.378 20:51:15 -- nvmf/common.sh@296 -- # local -ga x722 00:33:57.378 20:51:15 -- nvmf/common.sh@297 -- # mlx=() 00:33:57.378 20:51:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:57.378 20:51:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:57.378 20:51:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:57.378 20:51:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:57.378 20:51:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:57.378 20:51:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:57.378 20:51:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:57.378 20:51:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:57.378 20:51:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:57.378 20:51:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:57.378 20:51:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:57.378 20:51:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:57.378 20:51:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:57.378 20:51:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:33:57.378 20:51:15 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:33:57.378 20:51:15 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:33:57.378 20:51:15 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:33:57.378 20:51:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:57.378 20:51:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:57.378 20:51:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:33:57.378 Found 0000:27:00.0 (0x8086 - 0x159b) 00:33:57.378 20:51:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:57.378 20:51:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:57.378 20:51:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:57.378 20:51:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:57.378 20:51:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:57.378 20:51:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:57.378 20:51:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:33:57.378 Found 0000:27:00.1 (0x8086 - 0x159b) 00:33:57.378 20:51:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:57.378 20:51:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:57.378 20:51:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:57.378 20:51:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:57.378 20:51:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:57.378 20:51:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:57.378 20:51:15 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:33:57.378 20:51:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:57.378 
20:51:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:57.378 20:51:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:57.378 20:51:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:57.378 20:51:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:33:57.378 Found net devices under 0000:27:00.0: cvl_0_0 00:33:57.378 20:51:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:57.378 20:51:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:57.378 20:51:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:57.378 20:51:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:57.378 20:51:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:57.378 20:51:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:33:57.378 Found net devices under 0000:27:00.1: cvl_0_1 00:33:57.378 20:51:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:57.378 20:51:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:57.378 20:51:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:57.378 20:51:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:57.378 20:51:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:57.378 20:51:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:57.378 20:51:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:57.378 20:51:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:57.378 20:51:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:57.378 20:51:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:57.378 20:51:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:57.378 20:51:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:57.378 20:51:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:57.378 20:51:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:57.378 20:51:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:57.378 20:51:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:57.378 20:51:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:57.378 20:51:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:57.378 20:51:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:57.378 20:51:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:57.378 20:51:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:57.378 20:51:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:57.378 20:51:15 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:57.640 20:51:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:57.640 20:51:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:57.640 20:51:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:57.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:57.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:33:57.640 00:33:57.640 --- 10.0.0.2 ping statistics --- 00:33:57.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:57.640 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:33:57.640 20:51:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:57.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:57.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:33:57.640 00:33:57.640 --- 10.0.0.1 ping statistics --- 00:33:57.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:57.640 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:33:57.640 20:51:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:57.640 20:51:15 -- nvmf/common.sh@410 -- # return 0 00:33:57.640 20:51:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:57.640 20:51:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:57.640 20:51:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:57.640 20:51:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:57.640 20:51:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:57.640 20:51:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:57.640 20:51:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:57.640 20:51:15 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:57.640 20:51:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:57.640 20:51:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:57.640 20:51:15 -- common/autotest_common.sh@10 -- # set +x 00:33:57.640 20:51:15 -- nvmf/common.sh@469 -- # nvmfpid=3764087 00:33:57.640 20:51:15 -- nvmf/common.sh@470 -- # waitforlisten 3764087 00:33:57.640 20:51:15 -- common/autotest_common.sh@819 -- # '[' -z 3764087 ']' 00:33:57.640 20:51:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:57.640 20:51:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:57.640 20:51:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:57.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:57.640 20:51:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:57.640 20:51:15 -- common/autotest_common.sh@10 -- # set +x 00:33:57.640 20:51:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:57.640 [2024-04-26 20:51:15.888276] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:33:57.640 [2024-04-26 20:51:15.888403] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:57.640 EAL: No free 2048 kB hugepages reported on node 1 00:33:57.900 [2024-04-26 20:51:16.023118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:57.900 [2024-04-26 20:51:16.125235] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:57.900 [2024-04-26 20:51:16.125421] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:57.900 [2024-04-26 20:51:16.125435] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:57.900 [2024-04-26 20:51:16.125444] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
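The discovery_remove_ifc suite runs two SPDK processes side by side: the target that just started above lives inside the cvl_0_0_ns_spdk network namespace (core mask 0x2, listening on 10.0.0.2), while a second, host-side nvmf_tgt is launched below on core mask 0x1 with its RPC socket at /tmp/host.sock and -L bdev_nvme debug logging. The two pings above verify the path between the Intel NIC ports found in the PCI scan (0000:27:00.0/.1, ice driver) before any NVMe/TCP traffic flows. A condensed, hedged replay of the nvmf/common.sh plumbing captured above (requires root; interface names as in this run; the addr-flush and iptables steps are omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  ping -c 1 10.0.0.2                                # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # namespace -> host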
00:33:57.900 [2024-04-26 20:51:16.125480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:58.467 20:51:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:58.467 20:51:16 -- common/autotest_common.sh@852 -- # return 0 00:33:58.467 20:51:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:58.467 20:51:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:58.467 20:51:16 -- common/autotest_common.sh@10 -- # set +x 00:33:58.467 20:51:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:58.467 20:51:16 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:58.467 20:51:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:58.467 20:51:16 -- common/autotest_common.sh@10 -- # set +x 00:33:58.467 [2024-04-26 20:51:16.649177] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:58.467 [2024-04-26 20:51:16.657370] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:58.467 null0 00:33:58.467 [2024-04-26 20:51:16.689263] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:58.467 20:51:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:58.467 20:51:16 -- host/discovery_remove_ifc.sh@59 -- # hostpid=3764405 00:33:58.467 20:51:16 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3764405 /tmp/host.sock 00:33:58.467 20:51:16 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:58.467 20:51:16 -- common/autotest_common.sh@819 -- # '[' -z 3764405 ']' 00:33:58.467 20:51:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:33:58.467 20:51:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:58.467 20:51:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:58.467 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:58.467 20:51:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:58.467 20:51:16 -- common/autotest_common.sh@10 -- # set +x 00:33:58.467 [2024-04-26 20:51:16.782778] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
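[annotation] The rpc_cmd at host/discovery_remove_ifc.sh@43 runs with xtrace disabled, so its body is not in the log, but the notices it produces (TCP transport init, a discovery listener on port 8009, a lone "null0" line, a data listener on port 4420 for cnode0) correspond to an RPC sequence along the lines below. This is a hedged reconstruction, not the literal script body: the RPC names are standard SPDK RPCs, and the null-bdev size/block-size arguments are placeholders.

    rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py   # against /var/tmp/spdk.sock here
    $rpc nvmf_create_transport -t tcp -o             # "*** TCP Transport Init ***" ($NVMF_TRANSPORT_OPTS above)
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009                   # the port 8009 notice
    $rpc bdev_null_create null0 1000 512             # the "null0" line (sizes assumed)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420                   # the port 4420 notice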
00:33:58.467 [2024-04-26 20:51:16.782884] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3764405 ] 00:33:58.726 EAL: No free 2048 kB hugepages reported on node 1 00:33:58.726 [2024-04-26 20:51:16.894321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:58.726 [2024-04-26 20:51:16.988299] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:58.726 [2024-04-26 20:51:16.988493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:59.295 20:51:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:59.295 20:51:17 -- common/autotest_common.sh@852 -- # return 0 00:33:59.295 20:51:17 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:59.295 20:51:17 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:59.295 20:51:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:59.295 20:51:17 -- common/autotest_common.sh@10 -- # set +x 00:33:59.295 20:51:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:59.295 20:51:17 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:59.295 20:51:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:59.295 20:51:17 -- common/autotest_common.sh@10 -- # set +x 00:33:59.554 20:51:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:59.554 20:51:17 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:59.554 20:51:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:59.554 20:51:17 -- common/autotest_common.sh@10 -- # set +x 00:34:00.489 [2024-04-26 20:51:18.706830] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:00.489 [2024-04-26 20:51:18.706862] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:00.489 [2024-04-26 20:51:18.706881] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:00.747 [2024-04-26 20:51:18.833972] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:00.747 [2024-04-26 20:51:19.059257] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:00.747 [2024-04-26 20:51:19.059315] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:00.747 [2024-04-26 20:51:19.059352] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:00.747 [2024-04-26 20:51:19.059375] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:00.747 [2024-04-26 20:51:19.059405] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:00.747 20:51:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:00.747 20:51:19 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:34:00.747 [2024-04-26 20:51:19.062172] 
bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x613000003f40 was disconnected and freed. delete nvme_qpair. 00:34:00.747 20:51:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:00.747 20:51:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:00.747 20:51:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:00.747 20:51:19 -- common/autotest_common.sh@10 -- # set +x 00:34:00.747 20:51:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:00.747 20:51:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:00.747 20:51:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:00.747 20:51:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:01.004 20:51:19 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:34:01.004 20:51:19 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:34:01.004 20:51:19 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:34:01.004 20:51:19 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:34:01.004 20:51:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:01.004 20:51:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:01.004 20:51:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:01.004 20:51:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:01.004 20:51:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:01.004 20:51:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:01.004 20:51:19 -- common/autotest_common.sh@10 -- # set +x 00:34:01.004 20:51:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:01.004 20:51:19 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:01.004 20:51:19 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:01.978 20:51:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:01.978 20:51:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:01.978 20:51:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:01.978 20:51:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:01.978 20:51:20 -- common/autotest_common.sh@10 -- # set +x 00:34:01.978 20:51:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:01.978 20:51:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:01.978 20:51:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:01.978 20:51:20 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:01.978 20:51:20 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:02.955 20:51:21 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:02.955 20:51:21 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:02.955 20:51:21 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:02.955 20:51:21 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:02.955 20:51:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:02.955 20:51:21 -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:02.955 20:51:21 -- common/autotest_common.sh@10 -- # set +x 00:34:03.213 20:51:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:03.213 20:51:21 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:03.213 20:51:21 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:04.154 20:51:22 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:04.154 20:51:22 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:04.154 20:51:22 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:04.154 20:51:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:04.154 20:51:22 -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:04.154 20:51:22 -- common/autotest_common.sh@10 -- # set +x 00:34:04.154 20:51:22 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:04.154 20:51:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:04.154 20:51:22 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:04.154 20:51:22 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:05.093 20:51:23 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:05.093 20:51:23 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:05.093 20:51:23 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:05.093 20:51:23 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:05.093 20:51:23 -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:05.093 20:51:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:05.093 20:51:23 -- common/autotest_common.sh@10 -- # set +x 00:34:05.093 20:51:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:05.093 20:51:23 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:05.093 20:51:23 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:06.472 20:51:24 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:06.472 20:51:24 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:06.472 20:51:24 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:06.472 20:51:24 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:06.472 20:51:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:06.472 20:51:24 -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:06.472 20:51:24 -- common/autotest_common.sh@10 -- # set +x 00:34:06.472 20:51:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:06.472 20:51:24 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:06.472 20:51:24 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:06.472 [2024-04-26 20:51:24.487070] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:34:06.472 [2024-04-26 20:51:24.487141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:06.472 [2024-04-26 20:51:24.487157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:06.472 [2024-04-26 20:51:24.487171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:06.472 [2024-04-26 20:51:24.487179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:06.472 [2024-04-26 20:51:24.487188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:06.472 [2024-04-26 20:51:24.487196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:06.472 [2024-04-26 20:51:24.487205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
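[annotation] Every iteration above is the same probe: dump the bdev names from the host app and compare them against the expected list, once per second. Reconstructed from the trace (the real helpers in host/discovery_remove_ifc.sh may add a retry cap on top of this):

    get_bdev_list() {
        # one sorted, space-joined line of bdev names; '' when none exist
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # poll until the list equals the expected value, e.g. 'nvme0n1' or ''
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }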
qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:06.472 [2024-04-26 20:51:24.487213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:06.472 [2024-04-26 20:51:24.487222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:06.472 [2024-04-26 20:51:24.487230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:06.472 [2024-04-26 20:51:24.487238] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:34:06.472 [2024-04-26 20:51:24.497062] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:34:06.472 [2024-04-26 20:51:24.507082] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:07.407 20:51:25 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:07.408 20:51:25 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:07.408 20:51:25 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:07.408 20:51:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:07.408 20:51:25 -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:07.408 20:51:25 -- common/autotest_common.sh@10 -- # set +x 00:34:07.408 20:51:25 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:07.408 [2024-04-26 20:51:25.545417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:34:08.347 [2024-04-26 20:51:26.569422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:34:08.347 [2024-04-26 20:51:26.569493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:34:08.347 [2024-04-26 20:51:26.569531] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:34:08.347 [2024-04-26 20:51:26.570143] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:34:08.347 [2024-04-26 20:51:26.570180] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:08.347 [2024-04-26 20:51:26.570236] bdev_nvme.c:6504:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:34:08.347 [2024-04-26 20:51:26.570278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:08.347 [2024-04-26 20:51:26.570304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:08.347 [2024-04-26 20:51:26.570327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:08.347 [2024-04-26 20:51:26.570341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:08.347 [2024-04-26 20:51:26.570357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:08.347 [2024-04-26 20:51:26.570371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:08.347 [2024-04-26 20:51:26.570406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:08.347 [2024-04-26 20:51:26.570421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:08.347 [2024-04-26 20:51:26.570437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:08.347 [2024-04-26 20:51:26.570451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:08.347 [2024-04-26 20:51:26.570467] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:34:08.347 [2024-04-26 20:51:26.570581] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6130000034c0 (9): Bad file descriptor 00:34:08.347 [2024-04-26 20:51:26.571648] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:34:08.347 [2024-04-26 20:51:26.571665] nvme_ctrlr.c:1135:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:34:08.347 20:51:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:08.347 20:51:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:08.347 20:51:26 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:09.287 20:51:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:09.287 20:51:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:09.287 20:51:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:09.287 20:51:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:09.287 20:51:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:09.287 20:51:27 -- common/autotest_common.sh@10 -- # set +x 00:34:09.287 20:51:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:09.287 20:51:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:09.287 20:51:27 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:34:09.287 20:51:27 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:09.545 20:51:27 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:09.545 20:51:27 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:34:09.545 20:51:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:09.545 20:51:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:09.545 20:51:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:09.545 20:51:27 -- common/autotest_common.sh@10 -- # set +x 00:34:09.545 20:51:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:09.545 20:51:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:09.545 20:51:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:09.545 20:51:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:09.545 20:51:27 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:09.545 20:51:27 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:10.481 [2024-04-26 20:51:28.627668] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:10.481 [2024-04-26 20:51:28.627694] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:10.481 [2024-04-26 20:51:28.627716] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:10.481 20:51:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:10.481 20:51:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:10.481 20:51:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:10.481 20:51:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:10.481 20:51:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:10.481 20:51:28 -- common/autotest_common.sh@10 -- # set +x 00:34:10.481 20:51:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:10.481 20:51:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:10.481 [2024-04-26 20:51:28.755831] 
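[annotation] The connection-timeout and reset-failure noise above is the point of the test: at @75/@76 further up, the target-side address was deleted and cvl_0_0 taken down underneath the attached controller, and once ctrlr-loss-timeout-sec expires the bdev must vanish; the interface is then restored and discovery is expected to attach a fresh controller. Condensed from the traced commands:

    # tear the target interface out from under nvme0n1 ...
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    wait_for_bdev ''            # bdev list must drain after the ctrlr-loss timeout

    # ... then restore it; the rediscovered subsystem attaches as nvme1n1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1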
bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:34:10.481 20:51:28 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:10.481 20:51:28 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:10.481 [2024-04-26 20:51:28.814898] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:10.481 [2024-04-26 20:51:28.814947] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:10.481 [2024-04-26 20:51:28.814979] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:10.481 [2024-04-26 20:51:28.814999] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:34:10.481 [2024-04-26 20:51:28.815011] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:10.741 [2024-04-26 20:51:28.823668] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x613000004d40 was disconnected and freed. delete nvme_qpair. 00:34:11.680 20:51:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:11.680 20:51:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:11.680 20:51:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:11.680 20:51:29 -- common/autotest_common.sh@10 -- # set +x 00:34:11.680 20:51:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:11.680 20:51:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:11.680 20:51:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:11.680 20:51:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:11.680 20:51:29 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:34:11.680 20:51:29 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:34:11.680 20:51:29 -- host/discovery_remove_ifc.sh@90 -- # killprocess 3764405 00:34:11.680 20:51:29 -- common/autotest_common.sh@926 -- # '[' -z 3764405 ']' 00:34:11.680 20:51:29 -- common/autotest_common.sh@930 -- # kill -0 3764405 00:34:11.680 20:51:29 -- common/autotest_common.sh@931 -- # uname 00:34:11.680 20:51:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:11.680 20:51:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3764405 00:34:11.680 20:51:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:11.680 20:51:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:11.680 20:51:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3764405' 00:34:11.680 killing process with pid 3764405 00:34:11.680 20:51:29 -- common/autotest_common.sh@945 -- # kill 3764405 00:34:11.680 20:51:29 -- common/autotest_common.sh@950 -- # wait 3764405 00:34:11.938 20:51:30 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:34:11.938 20:51:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:34:11.938 20:51:30 -- nvmf/common.sh@116 -- # sync 00:34:11.938 20:51:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:34:11.938 20:51:30 -- nvmf/common.sh@119 -- # set +e 00:34:11.938 20:51:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:34:11.938 20:51:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:34:11.938 rmmod nvme_tcp 00:34:11.938 rmmod nvme_fabrics 00:34:12.196 rmmod nvme_keyring 00:34:12.196 20:51:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:34:12.196 20:51:30 -- nvmf/common.sh@123 -- # set -e 00:34:12.196 
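[annotation] killprocess, expanded at autotest_common.sh@926-@950 above, is roughly the helper below: a sketch assembled from the visible trace, not the verbatim source (the sudo branch in particular is simplified here).

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                          # still running?
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0 above
        fi
        if [[ $process_name != sudo ]]; then                # don't signal a sudo wrapper directly
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid" || true
        fi
    }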
20:51:30 -- nvmf/common.sh@124 -- # return 0 00:34:12.196 20:51:30 -- nvmf/common.sh@477 -- # '[' -n 3764087 ']' 00:34:12.196 20:51:30 -- nvmf/common.sh@478 -- # killprocess 3764087 00:34:12.196 20:51:30 -- common/autotest_common.sh@926 -- # '[' -z 3764087 ']' 00:34:12.196 20:51:30 -- common/autotest_common.sh@930 -- # kill -0 3764087 00:34:12.196 20:51:30 -- common/autotest_common.sh@931 -- # uname 00:34:12.196 20:51:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:12.196 20:51:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3764087 00:34:12.196 20:51:30 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:34:12.196 20:51:30 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:34:12.196 20:51:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3764087' 00:34:12.196 killing process with pid 3764087 00:34:12.196 20:51:30 -- common/autotest_common.sh@945 -- # kill 3764087 00:34:12.196 20:51:30 -- common/autotest_common.sh@950 -- # wait 3764087 00:34:12.767 20:51:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:34:12.767 20:51:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:34:12.767 20:51:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:34:12.767 20:51:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:12.767 20:51:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:34:12.767 20:51:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:12.767 20:51:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:12.767 20:51:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:14.680 20:51:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:34:14.680 00:34:14.680 real 0m22.533s 00:34:14.680 user 0m28.132s 00:34:14.680 sys 0m5.216s 00:34:14.680 20:51:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:14.680 20:51:32 -- common/autotest_common.sh@10 -- # set +x 00:34:14.680 ************************************ 00:34:14.680 END TEST nvmf_discovery_remove_ifc 00:34:14.680 ************************************ 00:34:14.680 20:51:32 -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:34:14.680 20:51:32 -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:14.680 20:51:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:34:14.680 20:51:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:14.680 20:51:32 -- common/autotest_common.sh@10 -- # set +x 00:34:14.680 ************************************ 00:34:14.680 START TEST nvmf_digest 00:34:14.680 ************************************ 00:34:14.680 20:51:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:14.680 * Looking for test storage... 
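[annotation] nvmftestfini at @91 above mirrors the init path: stop the target, unload the kernel initiator modules (the rmmod output), drop the namespace, and flush the leftover address. Condensed from the trace; only the 'ip netns delete' line is an assumption, since _remove_spdk_ns runs with xtrace disabled:

    killprocess "$nvmfpid"                # target started by nvmfappstart (pid 3764087 here)
    modprobe -v -r nvme-tcp               # produces the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk       # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1              # nvmf/common.sh@278 above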
00:34:14.680 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:34:14.680 20:51:32 -- host/digest.sh@12 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:34:14.680 20:51:32 -- nvmf/common.sh@7 -- # uname -s 00:34:14.680 20:51:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:14.680 20:51:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:14.680 20:51:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:14.680 20:51:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:14.680 20:51:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:14.680 20:51:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:14.680 20:51:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:14.680 20:51:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:14.680 20:51:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:14.680 20:51:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:14.680 20:51:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:34:14.680 20:51:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:34:14.680 20:51:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:14.680 20:51:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:14.680 20:51:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:34:14.680 20:51:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:34:14.680 20:51:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:14.680 20:51:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:14.680 20:51:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:14.680 20:51:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.680 20:51:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.680 20:51:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.680 20:51:33 -- paths/export.sh@5 -- # export PATH 00:34:14.680 20:51:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.680 20:51:33 -- nvmf/common.sh@46 -- # : 0 00:34:14.680 20:51:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:34:14.680 20:51:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:34:14.680 20:51:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:34:14.680 20:51:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:14.680 20:51:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:14.680 20:51:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:34:14.680 20:51:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:34:14.680 20:51:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:34:14.680 20:51:33 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:14.680 20:51:33 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:34:14.680 20:51:33 -- host/digest.sh@16 -- # runtime=2 00:34:14.680 20:51:33 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:34:14.680 20:51:33 -- host/digest.sh@132 -- # nvmftestinit 00:34:14.680 20:51:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:34:14.680 20:51:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:14.680 20:51:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:34:14.680 20:51:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:34:14.680 20:51:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:34:14.680 20:51:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:14.680 20:51:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:14.680 20:51:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:14.940 20:51:33 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:34:14.940 20:51:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:34:14.940 20:51:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:34:14.940 20:51:33 -- common/autotest_common.sh@10 -- # set +x 00:34:21.524 20:51:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:34:21.524 20:51:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:34:21.524 20:51:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:34:21.524 20:51:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:34:21.524 20:51:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:34:21.524 20:51:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:34:21.524 20:51:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:34:21.524 20:51:39 -- 
nvmf/common.sh@294 -- # net_devs=() 00:34:21.524 20:51:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:34:21.524 20:51:39 -- nvmf/common.sh@295 -- # e810=() 00:34:21.524 20:51:39 -- nvmf/common.sh@295 -- # local -ga e810 00:34:21.524 20:51:39 -- nvmf/common.sh@296 -- # x722=() 00:34:21.524 20:51:39 -- nvmf/common.sh@296 -- # local -ga x722 00:34:21.524 20:51:39 -- nvmf/common.sh@297 -- # mlx=() 00:34:21.524 20:51:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:34:21.524 20:51:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:21.524 20:51:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:21.524 20:51:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:21.524 20:51:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:21.524 20:51:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:21.524 20:51:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:21.524 20:51:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:21.524 20:51:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:21.524 20:51:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:21.524 20:51:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:21.524 20:51:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:21.524 20:51:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:34:21.524 20:51:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:34:21.524 20:51:39 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:34:21.524 20:51:39 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:34:21.524 20:51:39 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:34:21.524 20:51:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:34:21.524 20:51:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:21.524 20:51:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:34:21.524 Found 0000:27:00.0 (0x8086 - 0x159b) 00:34:21.524 20:51:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:21.524 20:51:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:21.524 20:51:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:21.524 20:51:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:21.524 20:51:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:21.524 20:51:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:21.524 20:51:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:34:21.524 Found 0000:27:00.1 (0x8086 - 0x159b) 00:34:21.524 20:51:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:21.524 20:51:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:21.524 20:51:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:21.524 20:51:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:21.524 20:51:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:21.524 20:51:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:34:21.524 20:51:39 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:34:21.524 20:51:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:21.524 20:51:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:21.524 20:51:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:21.524 20:51:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:21.524 20:51:39 -- nvmf/common.sh@388 -- # 
echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:34:21.524 Found net devices under 0000:27:00.0: cvl_0_0 00:34:21.524 20:51:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:21.524 20:51:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:21.524 20:51:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:21.524 20:51:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:21.524 20:51:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:21.524 20:51:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:34:21.524 Found net devices under 0000:27:00.1: cvl_0_1 00:34:21.524 20:51:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:21.524 20:51:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:34:21.524 20:51:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:34:21.524 20:51:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:34:21.524 20:51:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:34:21.524 20:51:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:34:21.524 20:51:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:21.524 20:51:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:21.524 20:51:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:21.524 20:51:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:34:21.524 20:51:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:21.524 20:51:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:21.524 20:51:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:34:21.524 20:51:39 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:21.524 20:51:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:21.524 20:51:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:34:21.524 20:51:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:34:21.524 20:51:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:34:21.524 20:51:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:21.524 20:51:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:21.524 20:51:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:21.524 20:51:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:34:21.524 20:51:39 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:21.524 20:51:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:21.524 20:51:39 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:21.524 20:51:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:34:21.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:21.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.704 ms 00:34:21.524 00:34:21.524 --- 10.0.0.2 ping statistics --- 00:34:21.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:21.524 rtt min/avg/max/mdev = 0.704/0.704/0.704/0.000 ms 00:34:21.524 20:51:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:21.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:21.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.415 ms 00:34:21.524 00:34:21.524 --- 10.0.0.1 ping statistics --- 00:34:21.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:21.524 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:34:21.524 20:51:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:21.524 20:51:39 -- nvmf/common.sh@410 -- # return 0 00:34:21.524 20:51:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:34:21.524 20:51:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:21.524 20:51:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:34:21.524 20:51:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:34:21.524 20:51:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:21.525 20:51:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:34:21.525 20:51:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:34:21.525 20:51:39 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:21.525 20:51:39 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:34:21.525 20:51:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:21.525 20:51:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:21.525 20:51:39 -- common/autotest_common.sh@10 -- # set +x 00:34:21.525 ************************************ 00:34:21.525 START TEST nvmf_digest_clean 00:34:21.525 ************************************ 00:34:21.525 20:51:39 -- common/autotest_common.sh@1104 -- # run_digest 00:34:21.525 20:51:39 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:34:21.525 20:51:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:34:21.525 20:51:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:21.525 20:51:39 -- common/autotest_common.sh@10 -- # set +x 00:34:21.525 20:51:39 -- nvmf/common.sh@469 -- # nvmfpid=3771107 00:34:21.525 20:51:39 -- nvmf/common.sh@470 -- # waitforlisten 3771107 00:34:21.525 20:51:39 -- common/autotest_common.sh@819 -- # '[' -z 3771107 ']' 00:34:21.525 20:51:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:21.525 20:51:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:21.525 20:51:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:21.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:21.525 20:51:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:21.525 20:51:39 -- common/autotest_common.sh@10 -- # set +x 00:34:21.525 20:51:39 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:21.525 [2024-04-26 20:51:39.333608] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:34:21.525 [2024-04-26 20:51:39.333725] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:21.525 EAL: No free 2048 kB hugepages reported on node 1 00:34:21.525 [2024-04-26 20:51:39.465877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:21.525 [2024-04-26 20:51:39.567524] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:21.525 [2024-04-26 20:51:39.567696] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:21.525 [2024-04-26 20:51:39.567710] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:21.525 [2024-04-26 20:51:39.567720] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:21.525 [2024-04-26 20:51:39.567755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:21.785 20:51:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:21.785 20:51:40 -- common/autotest_common.sh@852 -- # return 0 00:34:21.785 20:51:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:34:21.785 20:51:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:21.785 20:51:40 -- common/autotest_common.sh@10 -- # set +x 00:34:21.785 20:51:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:21.785 20:51:40 -- host/digest.sh@120 -- # common_target_config 00:34:21.785 20:51:40 -- host/digest.sh@43 -- # rpc_cmd 00:34:21.785 20:51:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:21.785 20:51:40 -- common/autotest_common.sh@10 -- # set +x 00:34:22.047 null0 00:34:22.047 [2024-04-26 20:51:40.233183] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:22.047 [2024-04-26 20:51:40.257356] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:22.047 20:51:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:22.047 20:51:40 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:34:22.047 20:51:40 -- host/digest.sh@77 -- # local rw bs qd 00:34:22.047 20:51:40 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:22.047 20:51:40 -- host/digest.sh@80 -- # rw=randread 00:34:22.047 20:51:40 -- host/digest.sh@80 -- # bs=4096 00:34:22.047 20:51:40 -- host/digest.sh@80 -- # qd=128 00:34:22.047 20:51:40 -- host/digest.sh@82 -- # bperfpid=3771399 00:34:22.047 20:51:40 -- host/digest.sh@83 -- # waitforlisten 3771399 /var/tmp/bperf.sock 00:34:22.047 20:51:40 -- common/autotest_common.sh@819 -- # '[' -z 3771399 ']' 00:34:22.047 20:51:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:22.047 20:51:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:22.047 20:51:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:22.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:34:22.047 20:51:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:22.047 20:51:40 -- common/autotest_common.sh@10 -- # set +x 00:34:22.047 20:51:40 -- host/digest.sh@81 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:22.047 [2024-04-26 20:51:40.334983] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:34:22.047 [2024-04-26 20:51:40.335100] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3771399 ] 00:34:22.308 EAL: No free 2048 kB hugepages reported on node 1 00:34:22.308 [2024-04-26 20:51:40.434059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:22.309 [2024-04-26 20:51:40.522794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:22.881 20:51:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:22.881 20:51:41 -- common/autotest_common.sh@852 -- # return 0 00:34:22.881 20:51:41 -- host/digest.sh@85 -- # [[ 1 -eq 1 ]] 00:34:22.881 20:51:41 -- host/digest.sh@85 -- # bperf_rpc dsa_scan_accel_module 00:34:22.881 20:51:41 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:34:22.881 [2024-04-26 20:51:41.159344] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:34:22.881 20:51:41 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:34:22.881 20:51:41 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:29.457 20:51:47 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:29.457 20:51:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:29.458 nvme0n1 00:34:29.458 20:51:47 -- host/digest.sh@91 -- # bperf_py perform_tests 00:34:29.458 20:51:47 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:29.715 Running I/O for 2 seconds... 
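[annotation] run_bperf above brings the initiator-side bdevperf up in --wait-for-rpc mode on its own socket, switches the accel framework over to DSA before framework init, and only then attaches the controller with data digest enabled, so every I/O's crc32c lands on the offload path. The sequence, taken directly from the traced commands (rpc.py/bdevperf.py paths shortened):

    bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    bperfpid=$!

    rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module      # "Enabled DSA user-mode"
    rpc.py -s /var/tmp/bperf.sock framework_start_init
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    bdevperf.py -s /var/tmp/bperf.sock perform_tests         # "Running I/O for 2 seconds..."

The same shape repeats below for each (workload, block size, queue depth) combination.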
00:34:31.619 00:34:31.619 Latency(us) 00:34:31.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:31.619 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:31.619 nvme0n1 : 2.04 20551.50 80.28 0.00 0.00 6099.97 2069.56 46634.04 00:34:31.619 =================================================================================================================== 00:34:31.619 Total : 20551.50 80.28 0.00 0.00 6099.97 2069.56 46634.04 00:34:31.619 0 00:34:31.619 20:51:49 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:34:31.619 20:51:49 -- host/digest.sh@92 -- # get_accel_stats 00:34:31.619 20:51:49 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:31.619 20:51:49 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:31.619 | select(.opcode=="crc32c") 00:34:31.619 | "\(.module_name) \(.executed)"' 00:34:31.619 20:51:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:31.879 20:51:50 -- host/digest.sh@93 -- # [[ 1 -eq 1 ]] 00:34:31.879 20:51:50 -- host/digest.sh@93 -- # exp_module=dsa 00:34:31.879 20:51:50 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:34:31.879 20:51:50 -- host/digest.sh@95 -- # [[ dsa == \d\s\a ]] 00:34:31.879 20:51:50 -- host/digest.sh@97 -- # killprocess 3771399 00:34:31.879 20:51:50 -- common/autotest_common.sh@926 -- # '[' -z 3771399 ']' 00:34:31.879 20:51:50 -- common/autotest_common.sh@930 -- # kill -0 3771399 00:34:31.879 20:51:50 -- common/autotest_common.sh@931 -- # uname 00:34:31.879 20:51:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:31.879 20:51:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3771399 00:34:31.879 20:51:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:34:31.879 20:51:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:34:31.879 20:51:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3771399' 00:34:31.879 killing process with pid 3771399 00:34:31.879 20:51:50 -- common/autotest_common.sh@945 -- # kill 3771399 00:34:31.879 Received shutdown signal, test time was about 2.000000 seconds 00:34:31.879 00:34:31.879 Latency(us) 00:34:31.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:31.879 =================================================================================================================== 00:34:31.879 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:31.879 20:51:50 -- common/autotest_common.sh@950 -- # wait 3771399 00:34:33.789 20:51:52 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:34:33.789 20:51:52 -- host/digest.sh@77 -- # local rw bs qd 00:34:33.789 20:51:52 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:33.789 20:51:52 -- host/digest.sh@80 -- # rw=randread 00:34:33.789 20:51:52 -- host/digest.sh@80 -- # bs=131072 00:34:33.789 20:51:52 -- host/digest.sh@80 -- # qd=16 00:34:33.789 20:51:52 -- host/digest.sh@82 -- # bperfpid=3773542 00:34:33.789 20:51:52 -- host/digest.sh@83 -- # waitforlisten 3773542 /var/tmp/bperf.sock 00:34:33.789 20:51:52 -- common/autotest_common.sh@819 -- # '[' -z 3773542 ']' 00:34:33.789 20:51:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:33.789 20:51:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:33.789 20:51:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
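[annotation] Pass/fail for each run comes from asking bdevperf which accel module actually executed the crc32c operations: with --ddgst set, the digest work must show up on the dsa module with a non-zero count. Reconstructed from the @92-@97 trace above:

    read -r acc_module acc_executed < <(
        rpc.py -s /var/tmp/bperf.sock accel_get_stats \
            | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    (( acc_executed > 0 ))           # digests were offloaded at all
    [[ $acc_module == dsa ]]         # ... and by DSA, not the software fallback
    killprocess "$bperfpid"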
/var/tmp/bperf.sock...' 00:34:33.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:33.789 20:51:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:33.789 20:51:52 -- common/autotest_common.sh@10 -- # set +x 00:34:33.789 20:51:52 -- host/digest.sh@81 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:33.789 [2024-04-26 20:51:52.127042] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:34:33.789 [2024-04-26 20:51:52.127189] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3773542 ] 00:34:33.789 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:33.789 Zero copy mechanism will not be used. 00:34:34.049 EAL: No free 2048 kB hugepages reported on node 1 00:34:34.049 [2024-04-26 20:51:52.257027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:34.049 [2024-04-26 20:51:52.353297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:34.616 20:51:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:34.616 20:51:52 -- common/autotest_common.sh@852 -- # return 0 00:34:34.616 20:51:52 -- host/digest.sh@85 -- # [[ 1 -eq 1 ]] 00:34:34.616 20:51:52 -- host/digest.sh@85 -- # bperf_rpc dsa_scan_accel_module 00:34:34.616 20:51:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:34:34.874 [2024-04-26 20:51:52.961871] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:34:34.874 20:51:52 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:34:34.874 20:51:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:41.442 20:51:59 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:41.442 20:51:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:41.442 nvme0n1 00:34:41.442 20:51:59 -- host/digest.sh@91 -- # bperf_py perform_tests 00:34:41.442 20:51:59 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:41.442 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:41.442 Zero copy mechanism will not be used. 00:34:41.442 Running I/O for 2 seconds... 
00:34:43.347 00:34:43.347 Latency(us) 00:34:43.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:43.347 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:43.347 nvme0n1 : 2.00 6079.89 759.99 0.00 0.00 2628.75 663.98 5760.27 00:34:43.347 =================================================================================================================== 00:34:43.347 Total : 6079.89 759.99 0.00 0.00 2628.75 663.98 5760.27 00:34:43.347 0 00:34:43.347 20:52:01 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:34:43.347 20:52:01 -- host/digest.sh@92 -- # get_accel_stats 00:34:43.347 20:52:01 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:43.347 20:52:01 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:43.347 20:52:01 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:43.347 | select(.opcode=="crc32c") 00:34:43.347 | "\(.module_name) \(.executed)"' 00:34:43.608 20:52:01 -- host/digest.sh@93 -- # [[ 1 -eq 1 ]] 00:34:43.608 20:52:01 -- host/digest.sh@93 -- # exp_module=dsa 00:34:43.608 20:52:01 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:34:43.608 20:52:01 -- host/digest.sh@95 -- # [[ dsa == \d\s\a ]] 00:34:43.608 20:52:01 -- host/digest.sh@97 -- # killprocess 3773542 00:34:43.608 20:52:01 -- common/autotest_common.sh@926 -- # '[' -z 3773542 ']' 00:34:43.608 20:52:01 -- common/autotest_common.sh@930 -- # kill -0 3773542 00:34:43.608 20:52:01 -- common/autotest_common.sh@931 -- # uname 00:34:43.608 20:52:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:43.608 20:52:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3773542 00:34:43.608 20:52:01 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:34:43.608 20:52:01 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:34:43.608 20:52:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3773542' 00:34:43.608 killing process with pid 3773542 00:34:43.608 20:52:01 -- common/autotest_common.sh@945 -- # kill 3773542 00:34:43.608 Received shutdown signal, test time was about 2.000000 seconds 00:34:43.608 00:34:43.608 Latency(us) 00:34:43.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:43.608 =================================================================================================================== 00:34:43.608 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:43.608 20:52:01 -- common/autotest_common.sh@950 -- # wait 3773542 00:34:45.695 20:52:03 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:34:45.695 20:52:03 -- host/digest.sh@77 -- # local rw bs qd 00:34:45.695 20:52:03 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:45.695 20:52:03 -- host/digest.sh@80 -- # rw=randwrite 00:34:45.695 20:52:03 -- host/digest.sh@80 -- # bs=4096 00:34:45.695 20:52:03 -- host/digest.sh@80 -- # qd=128 00:34:45.695 20:52:03 -- host/digest.sh@82 -- # bperfpid=3775884 00:34:45.695 20:52:03 -- host/digest.sh@83 -- # waitforlisten 3775884 /var/tmp/bperf.sock 00:34:45.695 20:52:03 -- common/autotest_common.sh@819 -- # '[' -z 3775884 ']' 00:34:45.695 20:52:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:45.695 20:52:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:45.695 20:52:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock...' 00:34:45.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:45.695 20:52:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:45.695 20:52:03 -- common/autotest_common.sh@10 -- # set +x 00:34:45.695 20:52:03 -- host/digest.sh@81 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:45.695 [2024-04-26 20:52:03.759070] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:34:45.695 [2024-04-26 20:52:03.759221] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3775884 ] 00:34:45.695 EAL: No free 2048 kB hugepages reported on node 1 00:34:45.695 [2024-04-26 20:52:03.890791] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:45.695 [2024-04-26 20:52:03.980821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:46.260 20:52:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:46.260 20:52:04 -- common/autotest_common.sh@852 -- # return 0 00:34:46.260 20:52:04 -- host/digest.sh@85 -- # [[ 1 -eq 1 ]] 00:34:46.260 20:52:04 -- host/digest.sh@85 -- # bperf_rpc dsa_scan_accel_module 00:34:46.260 20:52:04 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:34:46.260 [2024-04-26 20:52:04.573383] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:34:46.260 20:52:04 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:34:46.260 20:52:04 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:52.832 20:52:10 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:52.832 20:52:10 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:52.832 nvme0n1 00:34:52.832 20:52:11 -- host/digest.sh@91 -- # bperf_py perform_tests 00:34:52.832 20:52:11 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:53.090 Running I/O for 2 seconds... 
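[editor's note] The host/digest.sh@92-@95 trace that follows each run is the actual assertion: accel_get_stats is filtered down to the crc32c opcode with jq, and the test requires both that the owning module is dsa and that its executed counter is non-zero. Roughly, under the same $SPDK_DIR/$SOCK assumptions as the sketch above:

  # Sketch: pull accel stats, keep only the crc32c opcode, and assert
  # that the DSA module actually did the work.
  read -r acc_module acc_executed < <(
    "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" accel_get_stats |
      jq -rc '.operations[]
              | select(.opcode=="crc32c")
              | "\(.module_name) \(.executed)"'
  )
  [[ $acc_module == dsa ]] && ((acc_executed > 0)) &&
    echo "crc32c offloaded to dsa: $acc_executed operations"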
00:34:55.001
00:34:55.001 Latency(us)
00:34:55.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:55.001 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:55.001 nvme0n1 : 2.00 28044.05 109.55 0.00 0.00 4556.36 2552.45 7795.33
00:34:55.001 ===================================================================================================================
00:34:55.001 Total : 28044.05 109.55 0.00 0.00 4556.36 2552.45 7795.33
00:34:55.001 0
00:34:55.001 20:52:13 -- host/digest.sh@92 -- # read -r acc_module acc_executed
00:34:55.001 20:52:13 -- host/digest.sh@92 -- # get_accel_stats
00:34:55.001 20:52:13 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:34:55.001 20:52:13 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:34:55.001 | select(.opcode=="crc32c")
00:34:55.001 | "\(.module_name) \(.executed)"'
00:34:55.001 20:52:13 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:34:55.259 20:52:13 -- host/digest.sh@93 -- # [[ 1 -eq 1 ]]
00:34:55.259 20:52:13 -- host/digest.sh@93 -- # exp_module=dsa
00:34:55.259 20:52:13 -- host/digest.sh@94 -- # (( acc_executed > 0 ))
00:34:55.259 20:52:13 -- host/digest.sh@95 -- # [[ dsa == \d\s\a ]]
00:34:55.259 20:52:13 -- host/digest.sh@97 -- # killprocess 3775884
00:34:55.259 20:52:13 -- common/autotest_common.sh@926 -- # '[' -z 3775884 ']'
00:34:55.259 20:52:13 -- common/autotest_common.sh@930 -- # kill -0 3775884
00:34:55.259 20:52:13 -- common/autotest_common.sh@931 -- # uname
00:34:55.259 20:52:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:34:55.259 20:52:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3775884
00:34:55.259 20:52:13 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:34:55.259 20:52:13 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:34:55.259 20:52:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3775884'
00:34:55.259 killing process with pid 3775884
00:34:55.259 20:52:13 -- common/autotest_common.sh@945 -- # kill 3775884
00:34:55.259 Received shutdown signal, test time was about 2.000000 seconds
00:34:55.259
00:34:55.259 Latency(us)
00:34:55.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:55.259 ===================================================================================================================
00:34:55.259 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:55.259 20:52:13 -- common/autotest_common.sh@950 -- # wait 3775884
00:34:57.163 20:52:15 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16
00:34:57.163 20:52:15 -- host/digest.sh@77 -- # local rw bs qd
00:34:57.163 20:52:15 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:34:57.163 20:52:15 -- host/digest.sh@80 -- # rw=randwrite
00:34:57.163 20:52:15 -- host/digest.sh@80 -- # bs=131072
00:34:57.163 20:52:15 -- host/digest.sh@80 -- # qd=16
00:34:57.163 20:52:15 -- host/digest.sh@82 -- # bperfpid=3778090
00:34:57.163 20:52:15 -- host/digest.sh@83 -- # waitforlisten 3778090 /var/tmp/bperf.sock
00:34:57.164 20:52:15 -- common/autotest_common.sh@819 -- # '[' -z 3778090 ']'
00:34:57.164 20:52:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:34:57.164 20:52:15 -- common/autotest_common.sh@824 -- # local max_retries=100
00:34:57.164 20:52:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket
/var/tmp/bperf.sock...' 00:34:57.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:57.164 20:52:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:57.164 20:52:15 -- common/autotest_common.sh@10 -- # set +x 00:34:57.164 20:52:15 -- host/digest.sh@81 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:57.164 [2024-04-26 20:52:15.385578] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:34:57.164 [2024-04-26 20:52:15.385701] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3778090 ] 00:34:57.164 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:57.164 Zero copy mechanism will not be used. 00:34:57.164 EAL: No free 2048 kB hugepages reported on node 1 00:34:57.164 [2024-04-26 20:52:15.496964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:57.421 [2024-04-26 20:52:15.591832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:57.987 20:52:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:57.987 20:52:16 -- common/autotest_common.sh@852 -- # return 0 00:34:57.987 20:52:16 -- host/digest.sh@85 -- # [[ 1 -eq 1 ]] 00:34:57.987 20:52:16 -- host/digest.sh@85 -- # bperf_rpc dsa_scan_accel_module 00:34:57.987 20:52:16 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:34:57.987 [2024-04-26 20:52:16.192329] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:34:57.987 20:52:16 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:34:57.987 20:52:16 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:04.561 20:52:22 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:04.561 20:52:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:04.561 nvme0n1 00:35:04.561 20:52:22 -- host/digest.sh@91 -- # bperf_py perform_tests 00:35:04.561 20:52:22 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:04.561 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:04.561 Zero copy mechanism will not be used. 00:35:04.561 Running I/O for 2 seconds... 
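[editor's note] Every teardown above runs the killprocess helper from autotest_common.sh: check the pid is still alive with kill -0, look up its command name with ps so a sudo wrapper is not signalled blindly, then kill and wait to reap it. A stripped-down rendering (the real helper carries more branches):

  # Approximation of autotest_common.sh's killprocess, as exercised above.
  killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1 # bail if the process is already gone
    if [ "$(uname)" = Linux ]; then
      local name
      name=$(ps --no-headers -o comm= "$pid")
      [ "$name" = sudo ] && return 1 # don't signal a sudo wrapper directly
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" # reap it (the pid is our child in these tests)
  }

The wait matters: it guarantees the RPC socket and TCP port are released before the next run_bperf iteration starts its own bdevperf.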
00:35:07.094
00:35:07.094 Latency(us)
00:35:07.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:07.094 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:35:07.094 nvme0n1 : 2.00 5625.85 703.23 0.00 0.00 2840.01 1862.60 12417.35
00:35:07.094 ===================================================================================================================
00:35:07.094 Total : 5625.85 703.23 0.00 0.00 2840.01 1862.60 12417.35
00:35:07.094 0
00:35:07.094 20:52:24 -- host/digest.sh@92 -- # read -r acc_module acc_executed
00:35:07.094 20:52:24 -- host/digest.sh@92 -- # get_accel_stats
00:35:07.094 20:52:24 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:35:07.094 20:52:24 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:35:07.094 | select(.opcode=="crc32c")
00:35:07.094 | "\(.module_name) \(.executed)"'
00:35:07.094 20:52:24 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:35:07.094 20:52:25 -- host/digest.sh@93 -- # [[ 1 -eq 1 ]]
00:35:07.094 20:52:25 -- host/digest.sh@93 -- # exp_module=dsa
00:35:07.094 20:52:25 -- host/digest.sh@94 -- # (( acc_executed > 0 ))
00:35:07.094 20:52:25 -- host/digest.sh@95 -- # [[ dsa == \d\s\a ]]
00:35:07.094 20:52:25 -- host/digest.sh@97 -- # killprocess 3778090
00:35:07.094 20:52:25 -- common/autotest_common.sh@926 -- # '[' -z 3778090 ']'
00:35:07.094 20:52:25 -- common/autotest_common.sh@930 -- # kill -0 3778090
00:35:07.094 20:52:25 -- common/autotest_common.sh@931 -- # uname
00:35:07.094 20:52:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:35:07.094 20:52:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3778090
00:35:07.094 20:52:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:35:07.094 20:52:25 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:35:07.094 20:52:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3778090'
00:35:07.094 killing process with pid 3778090
00:35:07.094 20:52:25 -- common/autotest_common.sh@945 -- # kill 3778090
00:35:07.094 Received shutdown signal, test time was about 2.000000 seconds
00:35:07.094
00:35:07.094 Latency(us)
00:35:07.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:07.094 ===================================================================================================================
00:35:07.094 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:07.094 20:52:25 -- common/autotest_common.sh@950 -- # wait 3778090
00:35:09.002 20:52:26 -- host/digest.sh@126 -- # killprocess 3771107
00:35:09.002 20:52:26 -- common/autotest_common.sh@926 -- # '[' -z 3771107 ']'
00:35:09.002 20:52:26 -- common/autotest_common.sh@930 -- # kill -0 3771107
00:35:09.002 20:52:26 -- common/autotest_common.sh@931 -- # uname
00:35:09.002 20:52:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:35:09.002 20:52:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3771107
00:35:09.002 20:52:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:35:09.002 20:52:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:35:09.002 20:52:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3771107'
00:35:09.002 killing process with pid 3771107
00:35:09.002 20:52:26 -- common/autotest_common.sh@945 -- # kill 3771107
00:35:09.002 20:52:26 -- common/autotest_common.sh@950 -- # wait 3771107
00:35:09.261
00:35:09.261 real 0m48.210s
00:35:09.261 user 1m8.575s
00:35:09.261 sys 0m3.700s
00:35:09.261 20:52:27 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:35:09.261 20:52:27 -- common/autotest_common.sh@10 -- # set +x
00:35:09.261 ************************************
00:35:09.261 END TEST nvmf_digest_clean
00:35:09.261 ************************************
00:35:09.261 20:52:27 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error
00:35:09.261 20:52:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:35:09.261 20:52:27 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:35:09.261 20:52:27 -- common/autotest_common.sh@10 -- # set +x
00:35:09.261 ************************************
00:35:09.261 START TEST nvmf_digest_error
00:35:09.261 ************************************
00:35:09.261 20:52:27 -- common/autotest_common.sh@1104 -- # run_digest_error
00:35:09.261 20:52:27 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc
00:35:09.261 20:52:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:35:09.261 20:52:27 -- common/autotest_common.sh@712 -- # xtrace_disable
00:35:09.261 20:52:27 -- common/autotest_common.sh@10 -- # set +x
00:35:09.261 20:52:27 -- nvmf/common.sh@469 -- # nvmfpid=3780525
00:35:09.261 20:52:27 -- nvmf/common.sh@470 -- # waitforlisten 3780525
00:35:09.261 20:52:27 -- common/autotest_common.sh@819 -- # '[' -z 3780525 ']'
00:35:09.261 20:52:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:09.261 20:52:27 -- common/autotest_common.sh@824 -- # local max_retries=100
00:35:09.261 20:52:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:09.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:09.261 20:52:27 -- common/autotest_common.sh@828 -- # xtrace_disable
00:35:09.261 20:52:27 -- common/autotest_common.sh@10 -- # set +x
00:35:09.261 20:52:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:35:09.261 [2024-04-26 20:52:27.550931] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:35:09.261 [2024-04-26 20:52:27.551010] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:09.521 EAL: No free 2048 kB hugepages reported on node 1
00:35:09.521 [2024-04-26 20:52:27.642075] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:09.521 [2024-04-26 20:52:27.738191] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:35:09.521 [2024-04-26 20:52:27.738362] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:09.521 [2024-04-26 20:52:27.738374] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-04-26 20:52:27.738388] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:09.521 [2024-04-26 20:52:27.738413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:10.090 20:52:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:10.090 20:52:28 -- common/autotest_common.sh@852 -- # return 0 00:35:10.090 20:52:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:35:10.090 20:52:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:10.090 20:52:28 -- common/autotest_common.sh@10 -- # set +x 00:35:10.090 20:52:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:10.090 20:52:28 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:10.090 20:52:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:10.090 20:52:28 -- common/autotest_common.sh@10 -- # set +x 00:35:10.090 [2024-04-26 20:52:28.302911] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:10.090 20:52:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:10.090 20:52:28 -- host/digest.sh@104 -- # common_target_config 00:35:10.090 20:52:28 -- host/digest.sh@43 -- # rpc_cmd 00:35:10.090 20:52:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:10.090 20:52:28 -- common/autotest_common.sh@10 -- # set +x 00:35:10.351 null0 00:35:10.351 [2024-04-26 20:52:28.476627] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:10.351 [2024-04-26 20:52:28.500834] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:10.351 20:52:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:10.351 20:52:28 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:35:10.351 20:52:28 -- host/digest.sh@54 -- # local rw bs qd 00:35:10.351 20:52:28 -- host/digest.sh@56 -- # rw=randread 00:35:10.351 20:52:28 -- host/digest.sh@56 -- # bs=4096 00:35:10.351 20:52:28 -- host/digest.sh@56 -- # qd=128 00:35:10.351 20:52:28 -- host/digest.sh@58 -- # bperfpid=3780673 00:35:10.351 20:52:28 -- host/digest.sh@60 -- # waitforlisten 3780673 /var/tmp/bperf.sock 00:35:10.351 20:52:28 -- common/autotest_common.sh@819 -- # '[' -z 3780673 ']' 00:35:10.351 20:52:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:10.351 20:52:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:35:10.351 20:52:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:10.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:10.351 20:52:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:35:10.351 20:52:28 -- common/autotest_common.sh@10 -- # set +x 00:35:10.351 20:52:28 -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:10.351 [2024-04-26 20:52:28.578612] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:35:10.351 [2024-04-26 20:52:28.578727] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3780673 ] 00:35:10.351 EAL: No free 2048 kB hugepages reported on node 1 00:35:10.610 [2024-04-26 20:52:28.696240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:10.610 [2024-04-26 20:52:28.792588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:11.177 20:52:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:11.177 20:52:29 -- common/autotest_common.sh@852 -- # return 0 00:35:11.177 20:52:29 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:11.177 20:52:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:11.177 20:52:29 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:11.177 20:52:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:11.177 20:52:29 -- common/autotest_common.sh@10 -- # set +x 00:35:11.177 20:52:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:11.177 20:52:29 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:11.177 20:52:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:11.436 nvme0n1 00:35:11.436 20:52:29 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:11.436 20:52:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:11.436 20:52:29 -- common/autotest_common.sh@10 -- # set +x 00:35:11.436 20:52:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:11.436 20:52:29 -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:11.436 20:52:29 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:11.436 Running I/O for 2 seconds... 
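[editor's note] Where nvmf_digest_clean proved digests were offloaded, this nvmf_digest_error run proves bad digests are recoverable: crc32c on the target is routed through the error-injection accel module, the initiator is told to retry indefinitely (--bdev-retry-count -1) and keep NVMe error statistics, and digests are then corrupted at an interval of 256, which is what produces the wall of 'data digest error' / transient-transport-error retries below. Condensed from the RPCs in the log (helper names are mine; assumptions are noted in the comments):

  # Condensed sketch of the error-injection setup; each RPC call appears
  # verbatim in the log. Assumes the target listens on the default
  # /var/tmp/spdk.sock and bdevperf is already up idle (-z) on bperf.sock.
  tgt_rpc()   { "$SPDK_DIR/scripts/rpc.py" "$@"; }
  bperf_rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

  tgt_rpc accel_assign_opc -o crc32c -m error # route crc32c via the error module
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  tgt_rpc accel_error_inject_error -o crc32c -t disable # start from a clean slate
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 256 # corrupt digests at the logged interval
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

Each corrupted digest surfaces on the initiator as COMMAND TRANSIENT TRANSPORT ERROR (00/22); with the retry count set to -1 the bdev layer resubmits the I/O instead of failing it, which is exactly the pattern in the entries that follow.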
00:35:11.436 [2024-04-26 20:52:29.731573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.436 [2024-04-26 20:52:29.731618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.436 [2024-04-26 20:52:29.731631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.436 [2024-04-26 20:52:29.743126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.436 [2024-04-26 20:52:29.743158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.436 [2024-04-26 20:52:29.743171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.436 [2024-04-26 20:52:29.755261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.436 [2024-04-26 20:52:29.755291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.436 [2024-04-26 20:52:29.755303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.436 [2024-04-26 20:52:29.767691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.436 [2024-04-26 20:52:29.767718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.436 [2024-04-26 20:52:29.767729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.695 [2024-04-26 20:52:29.780524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.695 [2024-04-26 20:52:29.780552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.695 [2024-04-26 20:52:29.780562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.695 [2024-04-26 20:52:29.793060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.695 [2024-04-26 20:52:29.793091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.695 [2024-04-26 20:52:29.793102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.695 [2024-04-26 20:52:29.804588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.695 [2024-04-26 20:52:29.804614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.695 [2024-04-26 20:52:29.804624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.695 [2024-04-26 20:52:29.817521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.695 [2024-04-26 20:52:29.817548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.695 [2024-04-26 20:52:29.817563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.695 [2024-04-26 20:52:29.826025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.695 [2024-04-26 20:52:29.826050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.695 [2024-04-26 20:52:29.826060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.695 [2024-04-26 20:52:29.838164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.695 [2024-04-26 20:52:29.838192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.695 [2024-04-26 20:52:29.838203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.695 [2024-04-26 20:52:29.851075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.695 [2024-04-26 20:52:29.851100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.695 [2024-04-26 20:52:29.851110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.695 [2024-04-26 20:52:29.863332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.695 [2024-04-26 20:52:29.863358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.695 [2024-04-26 20:52:29.863368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.695 [2024-04-26 20:52:29.876281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.695 [2024-04-26 20:52:29.876307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.695 [2024-04-26 20:52:29.876317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.695 [2024-04-26 20:52:29.888141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.695 [2024-04-26 20:52:29.888167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.695 
[2024-04-26 20:52:29.888177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.695 [2024-04-26 20:52:29.900387] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.695 [2024-04-26 20:52:29.900415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.695 [2024-04-26 20:52:29.900426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.695 [2024-04-26 20:52:29.913200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.695 [2024-04-26 20:52:29.913224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.695 [2024-04-26 20:52:29.913234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.695 [2024-04-26 20:52:29.925175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.695 [2024-04-26 20:52:29.925216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.695 [2024-04-26 20:52:29.925226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.695 [2024-04-26 20:52:29.933720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.695 [2024-04-26 20:52:29.933748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.695 [2024-04-26 20:52:29.933758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.695 [2024-04-26 20:52:29.944940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.695 [2024-04-26 20:52:29.944966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.695 [2024-04-26 20:52:29.944976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.695 [2024-04-26 20:52:29.956707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.695 [2024-04-26 20:52:29.956734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.695 [2024-04-26 20:52:29.956744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.695 [2024-04-26 20:52:29.969607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.695 [2024-04-26 20:52:29.969639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:97 nsid:1 lba:20461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.695 [2024-04-26 20:52:29.969652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.695 [2024-04-26 20:52:29.983063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.695 [2024-04-26 20:52:29.983088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.695 [2024-04-26 20:52:29.983097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.695 [2024-04-26 20:52:29.995600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.695 [2024-04-26 20:52:29.995630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.695 [2024-04-26 20:52:29.995641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.695 [2024-04-26 20:52:30.011158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.695 [2024-04-26 20:52:30.011198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.696 [2024-04-26 20:52:30.011212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.696 [2024-04-26 20:52:30.027106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.696 [2024-04-26 20:52:30.027144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.696 [2024-04-26 20:52:30.027162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.957 [2024-04-26 20:52:30.038693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.957 [2024-04-26 20:52:30.038726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.957 [2024-04-26 20:52:30.038738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.957 [2024-04-26 20:52:30.048469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.957 [2024-04-26 20:52:30.048512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.957 [2024-04-26 20:52:30.048528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.957 [2024-04-26 20:52:30.061613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.957 [2024-04-26 
20:52:30.061644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.957 [2024-04-26 20:52:30.061655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.957 [2024-04-26 20:52:30.071080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.957 [2024-04-26 20:52:30.071111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.957 [2024-04-26 20:52:30.071122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.957 [2024-04-26 20:52:30.079594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.957 [2024-04-26 20:52:30.079622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.957 [2024-04-26 20:52:30.079632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.957 [2024-04-26 20:52:30.089829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.957 [2024-04-26 20:52:30.089855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.957 [2024-04-26 20:52:30.089865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.957 [2024-04-26 20:52:30.101768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.957 [2024-04-26 20:52:30.101801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.957 [2024-04-26 20:52:30.101813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.957 [2024-04-26 20:52:30.114742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.957 [2024-04-26 20:52:30.114768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.957 [2024-04-26 20:52:30.114778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.957 [2024-04-26 20:52:30.127211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.957 [2024-04-26 20:52:30.127237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.957 [2024-04-26 20:52:30.127246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.957 [2024-04-26 20:52:30.139804] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.957 [2024-04-26 20:52:30.139828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.957 [2024-04-26 20:52:30.139838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.957 [2024-04-26 20:52:30.152318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.957 [2024-04-26 20:52:30.152345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.957 [2024-04-26 20:52:30.152355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.957 [2024-04-26 20:52:30.164400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.958 [2024-04-26 20:52:30.164428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.958 [2024-04-26 20:52:30.164439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.958 [2024-04-26 20:52:30.176960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.958 [2024-04-26 20:52:30.176987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.958 [2024-04-26 20:52:30.176997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.958 [2024-04-26 20:52:30.189584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.958 [2024-04-26 20:52:30.189611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.958 [2024-04-26 20:52:30.189621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.958 [2024-04-26 20:52:30.201996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.958 [2024-04-26 20:52:30.202022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.958 [2024-04-26 20:52:30.202031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.958 [2024-04-26 20:52:30.214594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.958 [2024-04-26 20:52:30.214623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.958 [2024-04-26 20:52:30.214634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.958 [2024-04-26 20:52:30.227283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.958 [2024-04-26 20:52:30.227310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.958 [2024-04-26 20:52:30.227325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.958 [2024-04-26 20:52:30.239796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.958 [2024-04-26 20:52:30.239822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.958 [2024-04-26 20:52:30.239831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.958 [2024-04-26 20:52:30.251940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.958 [2024-04-26 20:52:30.251963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.958 [2024-04-26 20:52:30.251973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.958 [2024-04-26 20:52:30.264706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.958 [2024-04-26 20:52:30.264734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.958 [2024-04-26 20:52:30.264744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.958 [2024-04-26 20:52:30.276563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.958 [2024-04-26 20:52:30.276589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.958 [2024-04-26 20:52:30.276599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.958 [2024-04-26 20:52:30.289241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:11.958 [2024-04-26 20:52:30.289270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.958 [2024-04-26 20:52:30.289280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.220 [2024-04-26 20:52:30.301836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:12.220 [2024-04-26 20:52:30.301867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.220 [2024-04-26 20:52:30.301877] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.220 [2024-04-26 20:52:30.314926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:12.220 [2024-04-26 20:52:30.314954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.220 [2024-04-26 20:52:30.314964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.220 [2024-04-26 20:52:30.326681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:12.220 [2024-04-26 20:52:30.326709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.220 [2024-04-26 20:52:30.326719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.220 [2024-04-26 20:52:30.336469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:12.220 [2024-04-26 20:52:30.336497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.220 [2024-04-26 20:52:30.336507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.220 [2024-04-26 20:52:30.345189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:12.220 [2024-04-26 20:52:30.345214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.220 [2024-04-26 20:52:30.345223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.220 [2024-04-26 20:52:30.356738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:12.220 [2024-04-26 20:52:30.356763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.220 [2024-04-26 20:52:30.356773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.220 [2024-04-26 20:52:30.368752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:12.220 [2024-04-26 20:52:30.368777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:25506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.220 [2024-04-26 20:52:30.368786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.220 [2024-04-26 20:52:30.381836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:12.220 [2024-04-26 20:52:30.381860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18390 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.220 [2024-04-26 20:52:30.381870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.220 [2024-04-26 20:52:30.393978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:12.220 [2024-04-26 20:52:30.394008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.220 [2024-04-26 20:52:30.394019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.220 [2024-04-26 20:52:30.405710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:12.220 [2024-04-26 20:52:30.405736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.220 [2024-04-26 20:52:30.405746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.220 [2024-04-26 20:52:30.417642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:12.220 [2024-04-26 20:52:30.417669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.220 [2024-04-26 20:52:30.417679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.220 [2024-04-26 20:52:30.429272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:12.220 [2024-04-26 20:52:30.429300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.220 [2024-04-26 20:52:30.429315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.220 [2024-04-26 20:52:30.441107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:12.220 [2024-04-26 20:52:30.441134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.220 [2024-04-26 20:52:30.441145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.220 [2024-04-26 20:52:30.453624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:12.220 [2024-04-26 20:52:30.453649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.220 [2024-04-26 20:52:30.453659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.220 [2024-04-26 20:52:30.465726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:12.220 [2024-04-26 20:52:30.465750] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.220 [2024-04-26 20:52:30.465759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... roughly a hundred further READ commands between 20:52:30.477 and 20:52:31.722 repeat the same three-line pattern on tqpair 0x613000003d80: nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done reports *ERROR*: data digest error, the failing READ (len:1, varying cid and lba) is printed, and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22); the run as a whole accumulates the 165 such completions counted below ...]
00:35:13.524
00:35:13.524 Latency(us)
00:35:13.524 Device Information                                                        : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:35:13.524 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:35:13.524 nvme0n1                                                                   :       2.01   21012.58      82.08      0.00     0.00    6086.96    1914.34   18763.99
00:35:13.524 ===================================================================================================================
00:35:13.524 Total                                                                     :            21012.58      82.08      0.00     0.00    6086.96    1914.34   18763.99
00:35:13.524 0
00:35:13.524 20:52:31 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:13.524 20:52:31 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:13.524 20:52:31 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:13.524 20:52:31 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:13.524 | .driver_specific
00:35:13.524 | .nvme_error
00:35:13.524 | .status_code
00:35:13.524 | .command_transient_transport_error'
00:35:13.783 20:52:31 -- host/digest.sh@71 -- # (( 165 > 0 ))
00:35:13.783 20:52:31 -- host/digest.sh@73 -- # killprocess 3780673
00:35:13.783 20:52:31 -- common/autotest_common.sh@926 -- # '[' -z 3780673 ']'
00:35:13.783 20:52:31 -- common/autotest_common.sh@930 -- # kill -0 3780673
00:35:13.783 20:52:31 -- common/autotest_common.sh@931 -- # uname
00:35:13.783 20:52:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:35:13.783 20:52:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3780673
00:35:13.783 20:52:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:35:13.783 20:52:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:35:13.783 20:52:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3780673'
00:35:13.783 killing process with pid 3780673
00:35:13.783 20:52:31 -- common/autotest_common.sh@945 -- # kill 3780673
00:35:13.783 Received shutdown signal, test time was about 2.000000 seconds
00:35:13.783
00:35:13.783 Latency(us)
00:35:13.783 Device Information                                                        : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:35:13.783 ===================================================================================================================
00:35:13.783 Total                                                                     :                 0.00       0.00      0.00     0.00       0.00       0.00       0.00
00:35:13.783 20:52:31 -- common/autotest_common.sh@950 -- # wait 3780673
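The get_transient_errcount step traced above reduces to a single bperf RPC plus a jq filter over the returned iostat JSON. A minimal stand-alone sketch, assuming $SPDK_DIR points at the SPDK checkout used in this job, bdevperf is still serving RPCs on /var/tmp/bperf.sock, and bdev_nvme_set_options --nvme-error-stat has been applied so the per-status-code counters are populated (the RPC name and jq path are copied verbatim from the trace; the helper name mirrors the script's):

    #!/usr/bin/env bash
    # Sketch of the transient-error check performed by host/digest.sh above.
    # $SPDK_DIR and the already-running bdevperf instance are assumptions,
    # not part of the log.
    get_transient_errcount() {
        local bdev=$1
        # bdev_get_iostat carries per-bdev NVMe error counters once
        # bdev_nvme_set_options --nvme-error-stat is in effect.
        "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)
    # The run above counted 165 such completions; the test only requires > 0.
    (( errcount > 0 )) && echo "transient transport errors observed: $errcount"

The 165 compared against 0 in the trace is exactly this counter: each data digest error surfaces to the host as an NVMe completion with status COMMAND TRANSIENT TRANSPORT ERROR (00/22), which the bdev_nvme layer tallies per status code.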
00:35:14.042 20:52:32 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:35:14.042 20:52:32 -- host/digest.sh@54 -- # local rw bs qd
00:35:14.042 20:52:32 -- host/digest.sh@56 -- # rw=randread
00:35:14.042 20:52:32 -- host/digest.sh@56 -- # bs=131072
00:35:14.042 20:52:32 -- host/digest.sh@56 -- # qd=16
00:35:14.042 20:52:32 -- host/digest.sh@58 -- # bperfpid=3781472
00:35:14.042 20:52:32 -- host/digest.sh@60 -- # waitforlisten 3781472 /var/tmp/bperf.sock
00:35:14.042 20:52:32 -- common/autotest_common.sh@819 -- # '[' -z 3781472 ']'
00:35:14.043 20:52:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:14.043 20:52:32 -- common/autotest_common.sh@824 -- # local max_retries=100
00:35:14.043 20:52:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:35:14.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:35:14.043 20:52:32 -- common/autotest_common.sh@828 -- # xtrace_disable
00:35:14.043 20:52:32 -- common/autotest_common.sh@10 -- # set +x
00:35:14.043 20:52:32 -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:35:14.043 [2024-04-26 20:52:32.351132] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:35:14.043 [2024-04-26 20:52:32.351246] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3781472 ]
00:35:14.043 I/O size of 131072 is greater than zero copy threshold (65536). Zero copy mechanism will not be used.
00:35:14.303 EAL: No free 2048 kB hugepages reported on node 1
00:35:14.303 [2024-04-26 20:52:32.462748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:14.303 [2024-04-26 20:52:32.556928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:35:14.874 20:52:33 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:35:14.874 20:52:33 -- common/autotest_common.sh@852 -- # return 0
00:35:14.874 20:52:33 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:14.874 20:52:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:14.874 20:52:33 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:35:14.874 20:52:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:35:14.874 20:52:33 -- common/autotest_common.sh@10 -- # set +x
00:35:14.874 20:52:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:35:14.874 20:52:33 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:14.874 20:52:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:15.131 nvme0n1
00:35:15.131 20:52:33 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:35:15.131 20:52:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:35:15.131 20:52:33 -- common/autotest_common.sh@10 -- # set +x
00:35:15.131 20:52:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:35:15.131 20:52:33 -- host/digest.sh@69 -- # bperf_py perform_tests
00:35:15.131 20:52:33 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:15.131 I/O size of 131072 is greater than zero copy threshold (65536). Zero copy mechanism will not be used.
00:35:15.131 Running I/O for 2 seconds...
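Everything this second run needs is in the trace above; condensed, one run_bperf_err randread 131072 16 iteration looks roughly like the sketch below. $SPDK_DIR again stands in for the workspace path, the TCP target at 10.0.0.2:4420 exporting nqn.2016-06.io.spdk:cnode1 is assumed to have been configured earlier in the job, and every flag is copied from the trace, including -i 32 on the corrupt injection, which is reproduced as-is rather than interpreted:

    #!/usr/bin/env bash
    # Condensed sketch of one run_bperf_err iteration, following the trace above.
    set -e
    rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

    # bdevperf as NVMe/TCP initiator: 128 KiB random reads, queue depth 16, 2 s.
    "$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    sleep 1   # stand-in for the waitforlisten polling loop in the trace

    # Keep per-status-code NVMe error counters and retry failed I/O indefinitely,
    # so digest errors show up in the stats instead of failing the run outright.
    rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Clear any stale CRC32C injection, then corrupt CRC32C results.
    rpc accel_error_inject_error -o crc32c -t disable
    rpc accel_error_inject_error -o crc32c -t corrupt -i 32

    # Attach the remote namespace with data digest (--ddgst) enabled, so the
    # payload CRC32C of every received PDU is verified.
    rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Kick off the timed I/O run.
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

With the digest corrupted on the accel path, each affected 128 KiB read (len:32 blocks of 4 KiB) fails CRC verification in nvme_tcp_accel_seq_recv_compute_crc32_done and completes as TRANSIENT TRANSPORT ERROR, which is what the repeated block below shows.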
00:35:15.131 [2024-04-26 20:52:33.470690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80)
00:35:15.131 [2024-04-26 20:52:33.470742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:15.131 [2024-04-26 20:52:33.470762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... 19 further READ commands between 20:52:33.479 and 20:52:33.629 repeat the same three-line pattern on tqpair 0x613000003d80, always cid:15 with len:32 and a varying lba, each completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:35:15.392 [2024-04-26 20:52:33.637200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80)
00:35:15.392 [2024-04-26 20:52:33.637225] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.392 [2024-04-26 20:52:33.637234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.392 [2024-04-26 20:52:33.645521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.392 [2024-04-26 20:52:33.645543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.392 [2024-04-26 20:52:33.645553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.392 [2024-04-26 20:52:33.653738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.392 [2024-04-26 20:52:33.653760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.392 [2024-04-26 20:52:33.653769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.392 [2024-04-26 20:52:33.661989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.392 [2024-04-26 20:52:33.662012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.392 [2024-04-26 20:52:33.662021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.392 [2024-04-26 20:52:33.670249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.392 [2024-04-26 20:52:33.670270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.392 [2024-04-26 20:52:33.670280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.392 [2024-04-26 20:52:33.678512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.392 [2024-04-26 20:52:33.678534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.392 [2024-04-26 20:52:33.678544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.392 [2024-04-26 20:52:33.686702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.392 [2024-04-26 20:52:33.686726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.392 [2024-04-26 20:52:33.686736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.392 [2024-04-26 20:52:33.694950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 
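Every one of these failures completes with the same status, printed as "(00/22)": status code type 0x0 (generic command status) and status code 0x22, which spdk_nvme_print_completion names COMMAND TRANSIENT TRANSPORT ERROR; dnr:0 means the do-not-retry bit is clear, so the host may retry the command. A small sketch of how those fields unpack from the 16-bit status word of a completion entry (hypothetical decode_status helper, not SPDK's struct, though the bit layout follows the NVMe completion queue entry definition):

    #include <stdint.h>
    #include <stdio.h>

    /* NVMe CQE status word (dword 3, bits 31:16):
     *   bit 0     P   - phase tag
     *   bits 8:1  SC  - status code
     *   bits 11:9 SCT - status code type
     *   bit 14    M   - more
     *   bit 15    DNR - do not retry
     */
    struct nvme_status {
        unsigned p, sc, sct, m, dnr;
    };

    static struct nvme_status decode_status(uint16_t raw)
    {
        struct nvme_status s;
        s.p   = raw & 0x1u;
        s.sc  = (raw >> 1) & 0xFFu;
        s.sct = (raw >> 9) & 0x7u;
        s.m   = (raw >> 14) & 0x1u;
        s.dnr = (raw >> 15) & 0x1u;
        return s;
    }

    int main(void)
    {
        /* SCT=0x0, SC=0x22: the "(00/22)" transient transport error
         * reported for each digest failure in the log, with DNR clear. */
        uint16_t raw = (uint16_t)((0x0u << 9) | (0x22u << 1));
        struct nvme_status s = decode_status(raw);
        printf("sct=%x sc=%02x p=%u m=%u dnr=%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
        return 0;
    }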
00:35:15.392 [2024-04-26 20:52:33.694973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.392 [2024-04-26 20:52:33.694983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.392 [2024-04-26 20:52:33.703226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.392 [2024-04-26 20:52:33.703248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.392 [2024-04-26 20:52:33.703257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.392 [2024-04-26 20:52:33.711531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.392 [2024-04-26 20:52:33.711553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.392 [2024-04-26 20:52:33.711562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.392 [2024-04-26 20:52:33.719786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.392 [2024-04-26 20:52:33.719809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.392 [2024-04-26 20:52:33.719818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.392 [2024-04-26 20:52:33.728464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.392 [2024-04-26 20:52:33.728487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.392 [2024-04-26 20:52:33.728497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.651 [2024-04-26 20:52:33.737208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.651 [2024-04-26 20:52:33.737231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.652 [2024-04-26 20:52:33.737240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.652 [2024-04-26 20:52:33.745563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.652 [2024-04-26 20:52:33.745585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.652 [2024-04-26 20:52:33.745594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.652 [2024-04-26 20:52:33.751560] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.652 [2024-04-26 20:52:33.751583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.652 [2024-04-26 20:52:33.751600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.652 [2024-04-26 20:52:33.757271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.652 [2024-04-26 20:52:33.757293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.652 [2024-04-26 20:52:33.757302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.652 [2024-04-26 20:52:33.763320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.652 [2024-04-26 20:52:33.763342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.652 [2024-04-26 20:52:33.763352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.652 [2024-04-26 20:52:33.769520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.652 [2024-04-26 20:52:33.769543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.652 [2024-04-26 20:52:33.769553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.652 [2024-04-26 20:52:33.775890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.652 [2024-04-26 20:52:33.775916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.652 [2024-04-26 20:52:33.775927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.652 [2024-04-26 20:52:33.782571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.652 [2024-04-26 20:52:33.782596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.652 [2024-04-26 20:52:33.782607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.652 [2024-04-26 20:52:33.788481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.652 [2024-04-26 20:52:33.788504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.652 [2024-04-26 20:52:33.788514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.652 [2024-04-26 20:52:33.795801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.652 [2024-04-26 20:52:33.795824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.652 [2024-04-26 20:52:33.795834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.652 [2024-04-26 20:52:33.801973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.652 [2024-04-26 20:52:33.801996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.652 [2024-04-26 20:52:33.802006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.652 [2024-04-26 20:52:33.808439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.652 [2024-04-26 20:52:33.808462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.652 [2024-04-26 20:52:33.808471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.652 [2024-04-26 20:52:33.814943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.652 [2024-04-26 20:52:33.814967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.652 [2024-04-26 20:52:33.814976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.652 [2024-04-26 20:52:33.821446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.652 [2024-04-26 20:52:33.821470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.652 [2024-04-26 20:52:33.821480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.652 [2024-04-26 20:52:33.827895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.652 [2024-04-26 20:52:33.827918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.652 [2024-04-26 20:52:33.827927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.652 [2024-04-26 20:52:33.834173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.652 [2024-04-26 20:52:33.834195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.652 [2024-04-26 20:52:33.834204] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.652 [2024-04-26 20:52:33.840457] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.652 [2024-04-26 20:52:33.840480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.652 [2024-04-26 20:52:33.840489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.652 [2024-04-26 20:52:33.846719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.652 [2024-04-26 20:52:33.846741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.652 [2024-04-26 20:52:33.846751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.652 [2024-04-26 20:52:33.853014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.652 [2024-04-26 20:52:33.853037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.652 [2024-04-26 20:52:33.853046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.652 [2024-04-26 20:52:33.859322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.652 [2024-04-26 20:52:33.859348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.652 [2024-04-26 20:52:33.859363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.652 [2024-04-26 20:52:33.865664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.652 [2024-04-26 20:52:33.865687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.652 [2024-04-26 20:52:33.865697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.652 [2024-04-26 20:52:33.872183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.652 [2024-04-26 20:52:33.872206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.652 [2024-04-26 20:52:33.872215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.652 [2024-04-26 20:52:33.879067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.652 [2024-04-26 20:52:33.879090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.652 [2024-04-26 20:52:33.879099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.652 [2024-04-26 20:52:33.887157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.652 [2024-04-26 20:52:33.887180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.652 [2024-04-26 20:52:33.887189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.652 [2024-04-26 20:52:33.894903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.652 [2024-04-26 20:52:33.894926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.652 [2024-04-26 20:52:33.894935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.652 [2024-04-26 20:52:33.902451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.652 [2024-04-26 20:52:33.902476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.652 [2024-04-26 20:52:33.902486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.652 [2024-04-26 20:52:33.908142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.652 [2024-04-26 20:52:33.908165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.652 [2024-04-26 20:52:33.908175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.652 [2024-04-26 20:52:33.913922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.653 [2024-04-26 20:52:33.913945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.653 [2024-04-26 20:52:33.913954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.653 [2024-04-26 20:52:33.920498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.653 [2024-04-26 20:52:33.920521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.653 [2024-04-26 20:52:33.920531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.653 [2024-04-26 20:52:33.925743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.653 [2024-04-26 
20:52:33.925766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.653 [2024-04-26 20:52:33.925776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.653 [2024-04-26 20:52:33.932969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.653 [2024-04-26 20:52:33.932993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.653 [2024-04-26 20:52:33.933002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.653 [2024-04-26 20:52:33.940558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.653 [2024-04-26 20:52:33.940583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.653 [2024-04-26 20:52:33.940594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.653 [2024-04-26 20:52:33.948949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.653 [2024-04-26 20:52:33.948973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.653 [2024-04-26 20:52:33.948983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.653 [2024-04-26 20:52:33.956862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.653 [2024-04-26 20:52:33.956885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.653 [2024-04-26 20:52:33.956895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.653 [2024-04-26 20:52:33.964015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.653 [2024-04-26 20:52:33.964039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.653 [2024-04-26 20:52:33.964048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.653 [2024-04-26 20:52:33.971665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.653 [2024-04-26 20:52:33.971689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.653 [2024-04-26 20:52:33.971699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.653 [2024-04-26 20:52:33.979322] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.653 [2024-04-26 20:52:33.979344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.653 [2024-04-26 20:52:33.979359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.653 [2024-04-26 20:52:33.987029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.653 [2024-04-26 20:52:33.987053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.653 [2024-04-26 20:52:33.987063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.912 [2024-04-26 20:52:33.995761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.912 [2024-04-26 20:52:33.995784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.912 [2024-04-26 20:52:33.995793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.912 [2024-04-26 20:52:34.004947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.912 [2024-04-26 20:52:34.004975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.912 [2024-04-26 20:52:34.004986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.912 [2024-04-26 20:52:34.013864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.912 [2024-04-26 20:52:34.013888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.912 [2024-04-26 20:52:34.013898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.912 [2024-04-26 20:52:34.023530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.912 [2024-04-26 20:52:34.023555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.912 [2024-04-26 20:52:34.023564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.912 [2024-04-26 20:52:34.032989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.912 [2024-04-26 20:52:34.033012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.912 [2024-04-26 20:52:34.033021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.912 [2024-04-26 20:52:34.040342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.912 [2024-04-26 20:52:34.040364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.912 [2024-04-26 20:52:34.040374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.912 [2024-04-26 20:52:34.046123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.912 [2024-04-26 20:52:34.046146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.912 [2024-04-26 20:52:34.046155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.912 [2024-04-26 20:52:34.052592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.912 [2024-04-26 20:52:34.052614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.912 [2024-04-26 20:52:34.052623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.912 [2024-04-26 20:52:34.058784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.912 [2024-04-26 20:52:34.058807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.912 [2024-04-26 20:52:34.058816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.912 [2024-04-26 20:52:34.064677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.912 [2024-04-26 20:52:34.064699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.912 [2024-04-26 20:52:34.064708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.912 [2024-04-26 20:52:34.069767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.912 [2024-04-26 20:52:34.069789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.912 [2024-04-26 20:52:34.069798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.912 [2024-04-26 20:52:34.076126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.912 [2024-04-26 20:52:34.076151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.912 [2024-04-26 20:52:34.076162] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.912 [2024-04-26 20:52:34.080593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.912 [2024-04-26 20:52:34.080620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.912 [2024-04-26 20:52:34.080629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.912 [2024-04-26 20:52:34.083795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.912 [2024-04-26 20:52:34.083818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.912 [2024-04-26 20:52:34.083828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.912 [2024-04-26 20:52:34.088778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.912 [2024-04-26 20:52:34.088801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.912 [2024-04-26 20:52:34.088810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.913 [2024-04-26 20:52:34.093747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.913 [2024-04-26 20:52:34.093770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.913 [2024-04-26 20:52:34.093784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.913 [2024-04-26 20:52:34.099289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.913 [2024-04-26 20:52:34.099312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.913 [2024-04-26 20:52:34.099322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.913 [2024-04-26 20:52:34.105848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.913 [2024-04-26 20:52:34.105874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.913 [2024-04-26 20:52:34.105885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.913 [2024-04-26 20:52:34.112878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.913 [2024-04-26 20:52:34.112902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16000 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.913 [2024-04-26 20:52:34.112912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.913 [2024-04-26 20:52:34.120030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.913 [2024-04-26 20:52:34.120053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.913 [2024-04-26 20:52:34.120063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.913 [2024-04-26 20:52:34.126865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.913 [2024-04-26 20:52:34.126889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.913 [2024-04-26 20:52:34.126899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.913 [2024-04-26 20:52:34.133775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.913 [2024-04-26 20:52:34.133799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.913 [2024-04-26 20:52:34.133809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.913 [2024-04-26 20:52:34.140607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.913 [2024-04-26 20:52:34.140631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.913 [2024-04-26 20:52:34.140641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.913 [2024-04-26 20:52:34.147545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.913 [2024-04-26 20:52:34.147568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.913 [2024-04-26 20:52:34.147577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.913 [2024-04-26 20:52:34.154539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.913 [2024-04-26 20:52:34.154561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.913 [2024-04-26 20:52:34.154571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.913 [2024-04-26 20:52:34.161192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.913 [2024-04-26 20:52:34.161214] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.913 [2024-04-26 20:52:34.161224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.913 [2024-04-26 20:52:34.167728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.913 [2024-04-26 20:52:34.167750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.913 [2024-04-26 20:52:34.167759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.913 [2024-04-26 20:52:34.174999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.913 [2024-04-26 20:52:34.175029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.913 [2024-04-26 20:52:34.175042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.913 [2024-04-26 20:52:34.181588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.913 [2024-04-26 20:52:34.181613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.913 [2024-04-26 20:52:34.181623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.913 [2024-04-26 20:52:34.187905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.913 [2024-04-26 20:52:34.187929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.913 [2024-04-26 20:52:34.187939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.913 [2024-04-26 20:52:34.194739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.913 [2024-04-26 20:52:34.194763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.913 [2024-04-26 20:52:34.194773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.913 [2024-04-26 20:52:34.201821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.913 [2024-04-26 20:52:34.201845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.913 [2024-04-26 20:52:34.201863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.913 [2024-04-26 20:52:34.209108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x613000003d80) 00:35:15.913 [2024-04-26 20:52:34.209131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.913 [2024-04-26 20:52:34.209146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.913 [2024-04-26 20:52:34.216257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.913 [2024-04-26 20:52:34.216282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.913 [2024-04-26 20:52:34.216291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.913 [2024-04-26 20:52:34.222631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.913 [2024-04-26 20:52:34.222655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.913 [2024-04-26 20:52:34.222666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.913 [2024-04-26 20:52:34.230689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.913 [2024-04-26 20:52:34.230713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.913 [2024-04-26 20:52:34.230723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.913 [2024-04-26 20:52:34.238271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.913 [2024-04-26 20:52:34.238295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.913 [2024-04-26 20:52:34.238305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.913 [2024-04-26 20:52:34.245455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.913 [2024-04-26 20:52:34.245478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.913 [2024-04-26 20:52:34.245487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.913 [2024-04-26 20:52:34.252327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:15.913 [2024-04-26 20:52:34.252350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.913 [2024-04-26 20:52:34.252359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.174 [2024-04-26 
20:52:34.258809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:16.174 [2024-04-26 20:52:34.258835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.174 [2024-04-26 20:52:34.258844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.174 [2024-04-26 20:52:34.265160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:16.174 [2024-04-26 20:52:34.265183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.174 [2024-04-26 20:52:34.265193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.174 [2024-04-26 20:52:34.271638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:16.174 [2024-04-26 20:52:34.271663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.174 [2024-04-26 20:52:34.271672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.174 [2024-04-26 20:52:34.278208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:16.174 [2024-04-26 20:52:34.278232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.174 [2024-04-26 20:52:34.278242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.174 [2024-04-26 20:52:34.284601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:16.174 [2024-04-26 20:52:34.284625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.174 [2024-04-26 20:52:34.284635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.174 [2024-04-26 20:52:34.291025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:16.174 [2024-04-26 20:52:34.291048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.174 [2024-04-26 20:52:34.291057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.174 [2024-04-26 20:52:34.297433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:16.174 [2024-04-26 20:52:34.297455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.174 [2024-04-26 20:52:34.297465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:16.174 [2024-04-26 20:52:34.303690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80)
00:35:16.174 [2024-04-26 20:52:34.303714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:16.174 [2024-04-26 20:52:34.303724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:16.174 [2024-04-26 20:52:34.310083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80)
00:35:16.174 [2024-04-26 20:52:34.310108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:16.174 [2024-04-26 20:52:34.310117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[~140 similar digest-error triples from 2024-04-26 20:52:34.316519 through 20:52:35.204 elided: each repeats *ERROR*: data digest error on tqpair=(0x613000003d80), a READ (sqid:1, cid 0/1/2/15, nsid:1, len:32, varying lba), and a completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22)]
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.962 [2024-04-26 20:52:35.170987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.962 [2024-04-26 20:52:35.177275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:16.962 [2024-04-26 20:52:35.177300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.962 [2024-04-26 20:52:35.177311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.962 [2024-04-26 20:52:35.183664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:16.962 [2024-04-26 20:52:35.183688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.962 [2024-04-26 20:52:35.183698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.962 [2024-04-26 20:52:35.191351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:16.962 [2024-04-26 20:52:35.191378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.962 [2024-04-26 20:52:35.191395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.962 [2024-04-26 20:52:35.197742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:16.962 [2024-04-26 20:52:35.197768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.962 [2024-04-26 20:52:35.197777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.962 [2024-04-26 20:52:35.204202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:16.962 [2024-04-26 20:52:35.204225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.962 [2024-04-26 20:52:35.204235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.962 [2024-04-26 20:52:35.210647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:16.962 [2024-04-26 20:52:35.210672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.962 [2024-04-26 20:52:35.210682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.962 [2024-04-26 20:52:35.217849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x613000003d80) 00:35:16.962 [2024-04-26 20:52:35.217875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.962 [2024-04-26 20:52:35.217887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.962 [2024-04-26 20:52:35.224700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:16.962 [2024-04-26 20:52:35.224728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.962 [2024-04-26 20:52:35.224745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.962 [2024-04-26 20:52:35.229863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:16.962 [2024-04-26 20:52:35.229890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.962 [2024-04-26 20:52:35.229900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.962 [2024-04-26 20:52:35.234723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:16.962 [2024-04-26 20:52:35.234748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.962 [2024-04-26 20:52:35.234757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.962 [2024-04-26 20:52:35.239952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:16.962 [2024-04-26 20:52:35.239977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.962 [2024-04-26 20:52:35.239988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.962 [2024-04-26 20:52:35.246450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:16.962 [2024-04-26 20:52:35.246475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.962 [2024-04-26 20:52:35.246489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.962 [2024-04-26 20:52:35.251843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:16.962 [2024-04-26 20:52:35.251867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.962 [2024-04-26 20:52:35.251876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.962 [2024-04-26 20:52:35.257022] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:16.962 [2024-04-26 20:52:35.257046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.962 [2024-04-26 20:52:35.257056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.962 [2024-04-26 20:52:35.262383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:16.962 [2024-04-26 20:52:35.262409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.962 [2024-04-26 20:52:35.262418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.962 [2024-04-26 20:52:35.267732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:16.962 [2024-04-26 20:52:35.267756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.962 [2024-04-26 20:52:35.267765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.962 [2024-04-26 20:52:35.272997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:16.962 [2024-04-26 20:52:35.273021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.962 [2024-04-26 20:52:35.273030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.963 [2024-04-26 20:52:35.278233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:16.963 [2024-04-26 20:52:35.278256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.963 [2024-04-26 20:52:35.278266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.963 [2024-04-26 20:52:35.283520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:16.963 [2024-04-26 20:52:35.283543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.963 [2024-04-26 20:52:35.283553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.963 [2024-04-26 20:52:35.288875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:16.963 [2024-04-26 20:52:35.288912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.963 [2024-04-26 20:52:35.288923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.963 [2024-04-26 20:52:35.294115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:16.963 [2024-04-26 20:52:35.294146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.963 [2024-04-26 20:52:35.294156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.963 [2024-04-26 20:52:35.299219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:16.963 [2024-04-26 20:52:35.299243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.963 [2024-04-26 20:52:35.299253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.224 [2024-04-26 20:52:35.304459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:17.224 [2024-04-26 20:52:35.304485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.224 [2024-04-26 20:52:35.304495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.224 [2024-04-26 20:52:35.309556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:17.224 [2024-04-26 20:52:35.309580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.224 [2024-04-26 20:52:35.309590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.224 [2024-04-26 20:52:35.314514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:17.224 [2024-04-26 20:52:35.314537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.224 [2024-04-26 20:52:35.314547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.224 [2024-04-26 20:52:35.319611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:17.224 [2024-04-26 20:52:35.319635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.224 [2024-04-26 20:52:35.319645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.224 [2024-04-26 20:52:35.324758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:17.224 [2024-04-26 20:52:35.324782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.224 [2024-04-26 20:52:35.324792] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.224 [2024-04-26 20:52:35.329893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:17.224 [2024-04-26 20:52:35.329919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.224 [2024-04-26 20:52:35.329929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.224 [2024-04-26 20:52:35.334884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:17.224 [2024-04-26 20:52:35.334909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.224 [2024-04-26 20:52:35.334924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.224 [2024-04-26 20:52:35.339955] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:17.224 [2024-04-26 20:52:35.339982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.224 [2024-04-26 20:52:35.339993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.224 [2024-04-26 20:52:35.344892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:17.224 [2024-04-26 20:52:35.344918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.224 [2024-04-26 20:52:35.344928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.224 [2024-04-26 20:52:35.349896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:17.224 [2024-04-26 20:52:35.349921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.224 [2024-04-26 20:52:35.349930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.224 [2024-04-26 20:52:35.355094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:17.224 [2024-04-26 20:52:35.355117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.224 [2024-04-26 20:52:35.355126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.224 [2024-04-26 20:52:35.360186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:17.224 [2024-04-26 20:52:35.360209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:17.224 [2024-04-26 20:52:35.360218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.224 [2024-04-26 20:52:35.365389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:17.224 [2024-04-26 20:52:35.365414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.224 [2024-04-26 20:52:35.365424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.224 [2024-04-26 20:52:35.370468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:17.224 [2024-04-26 20:52:35.370491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.224 [2024-04-26 20:52:35.370500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.224 [2024-04-26 20:52:35.375567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:17.224 [2024-04-26 20:52:35.375591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.224 [2024-04-26 20:52:35.375601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.224 [2024-04-26 20:52:35.380512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:17.224 [2024-04-26 20:52:35.380539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.224 [2024-04-26 20:52:35.380549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.224 [2024-04-26 20:52:35.385678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:17.224 [2024-04-26 20:52:35.385702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.224 [2024-04-26 20:52:35.385711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.224 [2024-04-26 20:52:35.390747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:17.224 [2024-04-26 20:52:35.390771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.224 [2024-04-26 20:52:35.390780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.224 [2024-04-26 20:52:35.395784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:17.224 [2024-04-26 20:52:35.395808] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.224 [2024-04-26 20:52:35.395819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.224 [2024-04-26 20:52:35.400726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:17.224 [2024-04-26 20:52:35.400751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.224 [2024-04-26 20:52:35.400762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.225 [2024-04-26 20:52:35.405744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:17.225 [2024-04-26 20:52:35.405769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.225 [2024-04-26 20:52:35.405779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.225 [2024-04-26 20:52:35.410750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:17.225 [2024-04-26 20:52:35.410775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.225 [2024-04-26 20:52:35.410785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.225 [2024-04-26 20:52:35.417032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:17.225 [2024-04-26 20:52:35.417057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.225 [2024-04-26 20:52:35.417067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.225 [2024-04-26 20:52:35.422297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:17.225 [2024-04-26 20:52:35.422320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.225 [2024-04-26 20:52:35.422329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.225 [2024-04-26 20:52:35.427402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:17.225 [2024-04-26 20:52:35.427426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.225 [2024-04-26 20:52:35.427435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.225 [2024-04-26 20:52:35.432338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x613000003d80) 00:35:17.225 [2024-04-26 20:52:35.432363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.225 [2024-04-26 20:52:35.432373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.225 [2024-04-26 20:52:35.437474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:17.225 [2024-04-26 20:52:35.437497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.225 [2024-04-26 20:52:35.437507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.225 [2024-04-26 20:52:35.442604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:17.225 [2024-04-26 20:52:35.442626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.225 [2024-04-26 20:52:35.442636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.225 [2024-04-26 20:52:35.447766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:17.225 [2024-04-26 20:52:35.447790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.225 [2024-04-26 20:52:35.447800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.225 [2024-04-26 20:52:35.453017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:17.225 [2024-04-26 20:52:35.453041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.225 [2024-04-26 20:52:35.453050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.225 [2024-04-26 20:52:35.458170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:35:17.225 [2024-04-26 20:52:35.458195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.225 [2024-04-26 20:52:35.458204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.225 00:35:17.225 Latency(us) 00:35:17.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:17.225 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:17.225 nvme0n1 : 2.00 4749.21 593.65 0.00 0.00 3366.37 883.87 9726.92 00:35:17.225 =================================================================================================================== 00:35:17.225 Total : 4749.21 593.65 0.00 0.00 3366.37 883.87 9726.92 00:35:17.225 0 00:35:17.225 20:52:35 -- host/digest.sh@71 -- # get_transient_errcount 
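The get_transient_errcount helper traced here (its expansion follows below) is the pass/fail check of the digest test: it reads the per-bdev NVMe error counters that --nvme-error-stat accumulates and asserts the count is non-zero. A stand-alone sketch of the same query, using only the rpc.py client and the jq filter already visible in this trace; the shell variable names are illustrative, not names from host/digest.sh:

    #!/usr/bin/env bash
    # Query bdevperf's per-bdev iostat over its RPC socket and extract the
    # transient-transport-error counter (populated when --nvme-error-stat is set).
    rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    bdev=nvme0n1
    count=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')
    # The run above observed 306 such errors; anything greater than 0 passes.
    (( count > 0 )) && echo "$bdev recorded $count transient transport errors"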
00:35:17.225 20:52:35 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:17.225 20:52:35 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:17.225 20:52:35 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:17.225 | .driver_specific
00:35:17.225 | .nvme_error
00:35:17.225 | .status_code
00:35:17.225 | .command_transient_transport_error'
00:35:17.484 20:52:35 -- host/digest.sh@71 -- # (( 306 > 0 ))
00:35:17.484 20:52:35 -- host/digest.sh@73 -- # killprocess 3781472
00:35:17.484 20:52:35 -- common/autotest_common.sh@926 -- # '[' -z 3781472 ']'
00:35:17.484 20:52:35 -- common/autotest_common.sh@930 -- # kill -0 3781472
00:35:17.484 20:52:35 -- common/autotest_common.sh@931 -- # uname
00:35:17.484 20:52:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:35:17.484 20:52:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3781472
00:35:17.484 20:52:35 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:35:17.484 20:52:35 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:35:17.484 20:52:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3781472'
00:35:17.484 killing process with pid 3781472
00:35:17.484 20:52:35 -- common/autotest_common.sh@945 -- # kill 3781472
00:35:17.484 Received shutdown signal, test time was about 2.000000 seconds
00:35:17.484
00:35:17.484 Latency(us)
00:35:17.484 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:17.484 ===================================================================================================================
00:35:17.484 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:17.484 20:52:35 -- common/autotest_common.sh@950 -- # wait 3781472
00:35:17.741 20:52:36 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:35:17.741 20:52:36 -- host/digest.sh@54 -- # local rw bs qd
00:35:17.741 20:52:36 -- host/digest.sh@56 -- # rw=randwrite
00:35:17.741 20:52:36 -- host/digest.sh@56 -- # bs=4096
00:35:17.741 20:52:36 -- host/digest.sh@56 -- # qd=128
00:35:17.741 20:52:36 -- host/digest.sh@58 -- # bperfpid=3782098
00:35:17.741 20:52:36 -- host/digest.sh@60 -- # waitforlisten 3782098 /var/tmp/bperf.sock
00:35:17.741 20:52:36 -- common/autotest_common.sh@819 -- # '[' -z 3782098 ']'
00:35:17.741 20:52:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:17.741 20:52:36 -- common/autotest_common.sh@824 -- # local max_retries=100
00:35:17.741 20:52:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:35:17.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:35:17.741 20:52:36 -- common/autotest_common.sh@828 -- # xtrace_disable
00:35:17.741 20:52:36 -- common/autotest_common.sh@10 -- # set +x
00:35:17.741 20:52:36 -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
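run_bperf_err backgrounds bdevperf in idle mode (-z keeps it waiting for RPC instead of running a config file) and waitforlisten then blocks until the RPC socket answers. A rough equivalent of that launch sequence under the same paths, with the polling loop simplified; the real waitforlisten in autotest_common.sh is more involved (the max_retries=100 it uses is visible in the trace above):

    #!/usr/bin/env bash
    bdevperf=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf
    rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    # -m 2: run on core mask 0x2; -z: start idle and wait to be driven over RPC.
    "$bdevperf" -m 2 -r "$sock" -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    # Simplified stand-in for waitforlisten: retry a harmless RPC until it succeeds.
    for ((i = 0; i < 100; i++)); do
        "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done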
00:35:18.009 [2024-04-26 20:52:36.110654] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:35:18.009 [2024-04-26 20:52:36.110771] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3782098 ]
00:35:18.009 EAL: No free 2048 kB hugepages reported on node 1
00:35:18.009 [2024-04-26 20:52:36.223036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:18.009 [2024-04-26 20:52:36.319515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:35:18.574 20:52:36 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:35:18.574 20:52:36 -- common/autotest_common.sh@852 -- # return 0
00:35:18.574 20:52:36 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:18.574 20:52:36 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:18.834 20:52:36 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:35:18.834 20:52:36 -- common/autotest_common.sh@551 -- # xtrace_disable
00:35:18.834 20:52:36 -- common/autotest_common.sh@10 -- # set +x
00:35:18.834 20:52:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:35:18.834 20:52:36 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:18.834 20:52:36 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:19.094 nvme0n1
00:35:19.094 20:52:37 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:35:19.094 20:52:37 -- common/autotest_common.sh@551 -- # xtrace_disable
00:35:19.094 20:52:37 -- common/autotest_common.sh@10 -- # set +x
00:35:19.094 20:52:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:35:19.094 20:52:37 -- host/digest.sh@69 -- # bperf_py perform_tests
00:35:19.094 20:52:37 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:19.094 Running I/O for 2 seconds...
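With I/O about to start, the configuration traced above is complete and can be replayed by hand. The sequence below mirrors it one RPC at a time; note that the two accel_error_inject_error calls go through rpc_cmd, which (on my reading of the trace) targets the NVMe-oF target application's default RPC socket rather than bperf.sock, so the crc32c corruption happens target-side and every affected WRITE fails data-digest validation in tcp.c:data_crc32_calc_done:

    #!/usr/bin/env bash
    rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    # Count NVMe errors instead of failing, and retry failed I/O indefinitely.
    "$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Clear any injection left over from the previous pass (target-side socket
    # assumed here to be rpc.py's default, /var/tmp/spdk.sock).
    "$rpc" accel_error_inject_error -o crc32c -t disable
    # Attach the TCP controller with data digest enabled (--ddgst); prints nvme0n1.
    "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt crc32c results on the target; -i 256 is copied verbatim from the
    # trace and presumably controls how many operations get corrupted.
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256
    # Drive the preconfigured randwrite job.
    /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s "$sock" perform_tests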
00:35:19.094 [2024-04-26 20:52:37.426172] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8
00:35:19.094 [2024-04-26 20:52:37.426425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:19.094 [2024-04-26 20:52:37.426464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:35:19.094 [2024-04-26 20:52:37.435450] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8
00:35:19.094 [2024-04-26 20:52:37.435664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:19.094 [2024-04-26 20:52:37.435693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0
[... dozens of further injected WRITE failures between 20:52:37.444 and 20:52:37.914 condensed: each repeats the same three-line pattern (Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8, the failing WRITE, COMMAND TRANSIENT TRANSPORT ERROR (00/22) sqhd:007e on qid:1), with only the timestamp, cid and lba fields varying ...]
00:35:19.720 [2024-04-26 20:52:37.923173] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8
00:35:19.720 [2024-04-26 20:52:37.923374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:19.720 [2024-04-26 20:52:37.923404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e
p:0 m:0 dnr:0 00:35:19.720 [2024-04-26 20:52:37.932326] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.720 [2024-04-26 20:52:37.932538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.720 [2024-04-26 20:52:37.932562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.720 [2024-04-26 20:52:37.941470] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.720 [2024-04-26 20:52:37.941670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.720 [2024-04-26 20:52:37.941692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.720 [2024-04-26 20:52:37.950629] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.720 [2024-04-26 20:52:37.950831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.720 [2024-04-26 20:52:37.950854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.720 [2024-04-26 20:52:37.959763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.720 [2024-04-26 20:52:37.959963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.720 [2024-04-26 20:52:37.959985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.720 [2024-04-26 20:52:37.968922] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.720 [2024-04-26 20:52:37.969123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.720 [2024-04-26 20:52:37.969145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.720 [2024-04-26 20:52:37.978074] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.720 [2024-04-26 20:52:37.978275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:11053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.720 [2024-04-26 20:52:37.978297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.720 [2024-04-26 20:52:37.987220] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.720 [2024-04-26 20:52:37.987429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.720 [2024-04-26 20:52:37.987451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.720 [2024-04-26 20:52:37.996391] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.720 [2024-04-26 20:52:37.996593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.720 [2024-04-26 20:52:37.996617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.720 [2024-04-26 20:52:38.005747] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.720 [2024-04-26 20:52:38.005963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.720 [2024-04-26 20:52:38.005986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.720 [2024-04-26 20:52:38.015834] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.720 [2024-04-26 20:52:38.016055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.720 [2024-04-26 20:52:38.016078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.720 [2024-04-26 20:52:38.026600] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.720 [2024-04-26 20:52:38.026830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.720 [2024-04-26 20:52:38.026854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.720 [2024-04-26 20:52:38.037826] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.720 [2024-04-26 20:52:38.038060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.720 [2024-04-26 20:52:38.038089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.720 [2024-04-26 20:52:38.049125] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.720 [2024-04-26 20:52:38.049355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.720 [2024-04-26 20:52:38.049386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.720 [2024-04-26 20:52:38.059874] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.720 [2024-04-26 20:52:38.060092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.720 [2024-04-26 20:52:38.060114] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.982 [2024-04-26 20:52:38.069895] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.982 [2024-04-26 20:52:38.070099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.982 [2024-04-26 20:52:38.070123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.982 [2024-04-26 20:52:38.079623] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.982 [2024-04-26 20:52:38.079824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.982 [2024-04-26 20:52:38.079846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.982 [2024-04-26 20:52:38.089043] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.982 [2024-04-26 20:52:38.089259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.982 [2024-04-26 20:52:38.089282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.982 [2024-04-26 20:52:38.099118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.982 [2024-04-26 20:52:38.099336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.982 [2024-04-26 20:52:38.099359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.982 [2024-04-26 20:52:38.109182] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.982 [2024-04-26 20:52:38.109408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.982 [2024-04-26 20:52:38.109431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.982 [2024-04-26 20:52:38.119225] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.982 [2024-04-26 20:52:38.119448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.982 [2024-04-26 20:52:38.119471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.982 [2024-04-26 20:52:38.128438] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.982 [2024-04-26 20:52:38.128643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20306 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:35:19.982 [2024-04-26 20:52:38.128663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.982 [2024-04-26 20:52:38.137614] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.982 [2024-04-26 20:52:38.137817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.983 [2024-04-26 20:52:38.137839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.983 [2024-04-26 20:52:38.146755] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.983 [2024-04-26 20:52:38.146956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.983 [2024-04-26 20:52:38.146977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.983 [2024-04-26 20:52:38.155943] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.983 [2024-04-26 20:52:38.156146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.983 [2024-04-26 20:52:38.156168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.983 [2024-04-26 20:52:38.165109] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.983 [2024-04-26 20:52:38.165312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.983 [2024-04-26 20:52:38.165334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.983 [2024-04-26 20:52:38.174293] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.983 [2024-04-26 20:52:38.174496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.983 [2024-04-26 20:52:38.174518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.983 [2024-04-26 20:52:38.183504] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.983 [2024-04-26 20:52:38.183707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.983 [2024-04-26 20:52:38.183729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.983 [2024-04-26 20:52:38.192667] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.983 [2024-04-26 20:52:38.192871] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.983 [2024-04-26 20:52:38.192892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.983 [2024-04-26 20:52:38.201841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.983 [2024-04-26 20:52:38.202047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.983 [2024-04-26 20:52:38.202073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.983 [2024-04-26 20:52:38.211008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.983 [2024-04-26 20:52:38.211211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.983 [2024-04-26 20:52:38.211235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.983 [2024-04-26 20:52:38.220204] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.983 [2024-04-26 20:52:38.220413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.983 [2024-04-26 20:52:38.220436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.983 [2024-04-26 20:52:38.229375] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.983 [2024-04-26 20:52:38.229586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.983 [2024-04-26 20:52:38.229608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.983 [2024-04-26 20:52:38.238552] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.983 [2024-04-26 20:52:38.238756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.983 [2024-04-26 20:52:38.238778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.983 [2024-04-26 20:52:38.247701] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.983 [2024-04-26 20:52:38.247904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.983 [2024-04-26 20:52:38.247933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.983 [2024-04-26 20:52:38.256871] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.983 [2024-04-26 
20:52:38.257072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.983 [2024-04-26 20:52:38.257094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.983 [2024-04-26 20:52:38.266008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.983 [2024-04-26 20:52:38.266207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.983 [2024-04-26 20:52:38.266228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.983 [2024-04-26 20:52:38.275206] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.983 [2024-04-26 20:52:38.275413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.983 [2024-04-26 20:52:38.275435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.983 [2024-04-26 20:52:38.284635] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.983 [2024-04-26 20:52:38.284848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.983 [2024-04-26 20:52:38.284869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.983 [2024-04-26 20:52:38.293841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.983 [2024-04-26 20:52:38.294043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.983 [2024-04-26 20:52:38.294064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.983 [2024-04-26 20:52:38.303003] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.983 [2024-04-26 20:52:38.303207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.983 [2024-04-26 20:52:38.303228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.983 [2024-04-26 20:52:38.312146] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.983 [2024-04-26 20:52:38.312350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.983 [2024-04-26 20:52:38.312373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:19.983 [2024-04-26 20:52:38.321320] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:19.983 [2024-04-26 20:52:38.321531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.983 [2024-04-26 20:52:38.321553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:20.244 [2024-04-26 20:52:38.330492] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:20.244 [2024-04-26 20:52:38.330695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.244 [2024-04-26 20:52:38.330717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:20.244 [2024-04-26 20:52:38.339650] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:20.244 [2024-04-26 20:52:38.339831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.244 [2024-04-26 20:52:38.339853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:20.244 [2024-04-26 20:52:38.348806] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:20.244 [2024-04-26 20:52:38.348990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.244 [2024-04-26 20:52:38.349012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:20.244 [2024-04-26 20:52:38.357989] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:20.244 [2024-04-26 20:52:38.358173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.245 [2024-04-26 20:52:38.358197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:20.245 [2024-04-26 20:52:38.367106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:20.245 [2024-04-26 20:52:38.367286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.245 [2024-04-26 20:52:38.367308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:20.245 [2024-04-26 20:52:38.376270] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:20.245 [2024-04-26 20:52:38.376454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.245 [2024-04-26 20:52:38.376477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:20.245 [2024-04-26 20:52:38.385403] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:20.245 [2024-04-26 20:52:38.385582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.245 [2024-04-26 20:52:38.385603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:20.245 [2024-04-26 20:52:38.394545] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:20.245 [2024-04-26 20:52:38.394726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.245 [2024-04-26 20:52:38.394748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:20.245 [2024-04-26 20:52:38.403663] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:20.245 [2024-04-26 20:52:38.403843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.245 [2024-04-26 20:52:38.403864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:20.245 [2024-04-26 20:52:38.412773] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:20.245 [2024-04-26 20:52:38.412959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.245 [2024-04-26 20:52:38.412981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:20.245 [2024-04-26 20:52:38.421887] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:20.245 [2024-04-26 20:52:38.422068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.245 [2024-04-26 20:52:38.422090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:20.245 [2024-04-26 20:52:38.431010] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:20.245 [2024-04-26 20:52:38.431192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.245 [2024-04-26 20:52:38.431213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:20.245 [2024-04-26 20:52:38.440113] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:20.245 [2024-04-26 20:52:38.440292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.245 [2024-04-26 20:52:38.440317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e 
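What this run is exercising: NVMe/TCP can protect each data PDU's payload with a CRC32C data digest (the DDGST field). SPDK's TCP transport recomputes the digest when the payload arrives (the data_crc32_calc_done callback in tcp.c above), and on a mismatch the affected WRITE is failed back to the initiator as COMMAND TRANSIENT TRANSPORT ERROR instead of being treated as good data. The sketch below is a minimal, self-contained illustration of that digest, assuming the standard reflected CRC32C convention (Castagnoli polynomial, reflected form 0x82F63B78, all-ones seed, final inversion); it is written for this note only and is not SPDK's implementation, which typically uses table-driven or hardware-accelerated helpers.

/* crc32c_sketch.c - illustrative only, not SPDK code. */
#include <assert.h>
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli): reflected polynomial 0x82F63B78,
 * seed 0xFFFFFFFF, final inversion - the convention NVMe/TCP uses
 * for its header (HDGST) and data (DDGST) digests. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
    }
    return ~crc;
}

int main(void)
{
    /* Well-known CRC-32C check value for the ASCII string "123456789". */
    assert(crc32c((const uint8_t *)"123456789", 9) == 0xE3069283u);

    /* A 4 KiB payload, matching the len:0x1000 transfers in the log. */
    static uint8_t payload[0x1000];
    memset(payload, 0xA5, sizeof(payload));

    uint32_t ddgst = crc32c(payload, sizeof(payload));

    /* Flip one bit after the digest was computed: this models the kind of
     * mismatch being exercised above - the receiver's recomputed digest no
     * longer matches the DDGST that accompanied the payload. */
    payload[0] ^= 0x01;
    printf("sent DDGST=0x%08x, recomputed=0x%08x -> %s\n",
           ddgst, crc32c(payload, sizeof(payload)),
           ddgst == crc32c(payload, sizeof(payload)) ? "ok" : "digest error");
    return 0;
}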
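The completion lines themselves are the 16-bit status field of the NVMe completion queue entry broken out. The sketch below is a hypothetical decoder, not SPDK's spdk_nvme_print_completion; it splits a status word per the NVMe base specification layout and reproduces the "(00/22) p:0 m:0 dnr:0" shape printed above. Status code type 0x00 is the generic command status set, status code 0x22 within it is Command Transient Transport Error, and DNR clear (dnr:0) means the host may retry; the cycling cids above are consistent with the initiator resubmitting, and failing, the same small WRITEs over and over.

/* status_decode_sketch.c - illustrative only, not SPDK code. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Status field layout per the NVMe base spec (CQE dword 3, bits 31:16):
     * bit 0 = phase tag, bits 8:1 = status code (SC), bits 11:9 = status
     * code type (SCT), bits 13:12 = command retry delay, bit 14 = more,
     * bit 15 = do not retry (DNR). */
    uint16_t status = (uint16_t)(0x22u << 1); /* SCT 0, SC 0x22, p/m/dnr 0 */

    unsigned p   = status & 0x1u;
    unsigned sc  = (status >> 1) & 0xFFu;
    unsigned sct = (status >> 9) & 0x7u;
    unsigned m   = (status >> 14) & 0x1u;
    unsigned dnr = (status >> 15) & 0x1u;

    /* Prints "(00/22) p:0 m:0 dnr:0", matching the completions above. */
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    return 0;
}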
[...]
00:35:20.771 [2024-04-26 20:52:38.942114] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8
00:35:20.771 [2024-04-26 20:52:38.942291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.771 [2024-04-26 20:52:38.942314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e
p:0 m:0 dnr:0 00:35:20.771 [2024-04-26 20:52:38.951234] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:20.771 [2024-04-26 20:52:38.951419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.771 [2024-04-26 20:52:38.951442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:20.771 [2024-04-26 20:52:38.960356] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:20.771 [2024-04-26 20:52:38.960544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.771 [2024-04-26 20:52:38.960566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:20.771 [2024-04-26 20:52:38.969496] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:20.771 [2024-04-26 20:52:38.969678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.771 [2024-04-26 20:52:38.969700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:20.771 [2024-04-26 20:52:38.978615] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:20.771 [2024-04-26 20:52:38.978796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.771 [2024-04-26 20:52:38.978818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:20.771 [2024-04-26 20:52:38.987732] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:20.771 [2024-04-26 20:52:38.987913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.771 [2024-04-26 20:52:38.987939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:20.771 [2024-04-26 20:52:38.996847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:20.771 [2024-04-26 20:52:38.997025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.771 [2024-04-26 20:52:38.997047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:20.771 [2024-04-26 20:52:39.005961] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:20.771 [2024-04-26 20:52:39.006143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.771 [2024-04-26 20:52:39.006166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:20.771 [2024-04-26 20:52:39.015085] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:20.771 [2024-04-26 20:52:39.015266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.771 [2024-04-26 20:52:39.015288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:20.771 [2024-04-26 20:52:39.024255] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:20.771 [2024-04-26 20:52:39.024448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.771 [2024-04-26 20:52:39.024469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:20.771 [2024-04-26 20:52:39.033399] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:20.771 [2024-04-26 20:52:39.033579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.771 [2024-04-26 20:52:39.033600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:20.771 [2024-04-26 20:52:39.042557] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:20.771 [2024-04-26 20:52:39.042736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.771 [2024-04-26 20:52:39.042758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:20.771 [2024-04-26 20:52:39.051677] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:20.771 [2024-04-26 20:52:39.051857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.771 [2024-04-26 20:52:39.051878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:20.771 [2024-04-26 20:52:39.060799] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:20.771 [2024-04-26 20:52:39.060980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.771 [2024-04-26 20:52:39.061000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:20.771 [2024-04-26 20:52:39.069928] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:20.771 [2024-04-26 20:52:39.070113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.771 [2024-04-26 20:52:39.070139] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:20.772 [2024-04-26 20:52:39.079077] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:20.772 [2024-04-26 20:52:39.079257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.772 [2024-04-26 20:52:39.079281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:20.772 [2024-04-26 20:52:39.088199] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:20.772 [2024-04-26 20:52:39.088384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.772 [2024-04-26 20:52:39.088406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:20.772 [2024-04-26 20:52:39.097308] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:20.772 [2024-04-26 20:52:39.097488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.772 [2024-04-26 20:52:39.097510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:20.772 [2024-04-26 20:52:39.106680] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:20.772 [2024-04-26 20:52:39.106880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.772 [2024-04-26 20:52:39.106904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.032 [2024-04-26 20:52:39.117488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.032 [2024-04-26 20:52:39.117687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.032 [2024-04-26 20:52:39.117709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.032 [2024-04-26 20:52:39.126743] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.032 [2024-04-26 20:52:39.126925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.032 [2024-04-26 20:52:39.126947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.032 [2024-04-26 20:52:39.135877] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.032 [2024-04-26 20:52:39.136056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12651 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:35:21.032 [2024-04-26 20:52:39.136080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.032 [2024-04-26 20:52:39.145025] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.032 [2024-04-26 20:52:39.145210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.032 [2024-04-26 20:52:39.145242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.032 [2024-04-26 20:52:39.154165] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.033 [2024-04-26 20:52:39.154345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.033 [2024-04-26 20:52:39.154368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.033 [2024-04-26 20:52:39.163297] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.033 [2024-04-26 20:52:39.163477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.033 [2024-04-26 20:52:39.163500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.033 [2024-04-26 20:52:39.172425] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.033 [2024-04-26 20:52:39.172608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.033 [2024-04-26 20:52:39.172628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.033 [2024-04-26 20:52:39.181540] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.033 [2024-04-26 20:52:39.181719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.033 [2024-04-26 20:52:39.181740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.033 [2024-04-26 20:52:39.190641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.033 [2024-04-26 20:52:39.190827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.033 [2024-04-26 20:52:39.190849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.033 [2024-04-26 20:52:39.199773] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.033 [2024-04-26 20:52:39.199953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:35 nsid:1 lba:2877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.033 [2024-04-26 20:52:39.199974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.033 [2024-04-26 20:52:39.208892] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.033 [2024-04-26 20:52:39.209074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.033 [2024-04-26 20:52:39.209098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.033 [2024-04-26 20:52:39.219138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.033 [2024-04-26 20:52:39.219349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.033 [2024-04-26 20:52:39.219373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.033 [2024-04-26 20:52:39.229349] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.033 [2024-04-26 20:52:39.229537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.033 [2024-04-26 20:52:39.229558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.033 [2024-04-26 20:52:39.238485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.033 [2024-04-26 20:52:39.238664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.033 [2024-04-26 20:52:39.238685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.033 [2024-04-26 20:52:39.247596] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.033 [2024-04-26 20:52:39.247772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.033 [2024-04-26 20:52:39.247793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.033 [2024-04-26 20:52:39.256704] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.033 [2024-04-26 20:52:39.256881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.033 [2024-04-26 20:52:39.256903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.033 [2024-04-26 20:52:39.265812] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.033 [2024-04-26 20:52:39.265990] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.033 [2024-04-26 20:52:39.266013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.033 [2024-04-26 20:52:39.274959] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.033 [2024-04-26 20:52:39.275136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.033 [2024-04-26 20:52:39.275158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.033 [2024-04-26 20:52:39.284237] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.033 [2024-04-26 20:52:39.284419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.033 [2024-04-26 20:52:39.284442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.033 [2024-04-26 20:52:39.293349] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.033 [2024-04-26 20:52:39.293531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.033 [2024-04-26 20:52:39.293552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.033 [2024-04-26 20:52:39.302465] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.033 [2024-04-26 20:52:39.302642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.033 [2024-04-26 20:52:39.302664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.033 [2024-04-26 20:52:39.311563] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.033 [2024-04-26 20:52:39.311739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.033 [2024-04-26 20:52:39.311761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.033 [2024-04-26 20:52:39.320659] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.033 [2024-04-26 20:52:39.320835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.033 [2024-04-26 20:52:39.320857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.033 [2024-04-26 20:52:39.329776] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000195fe2e8 00:35:21.033 [2024-04-26 20:52:39.329953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.033 [2024-04-26 20:52:39.329975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.033 [2024-04-26 20:52:39.338858] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.033 [2024-04-26 20:52:39.339035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.033 [2024-04-26 20:52:39.339058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.033 [2024-04-26 20:52:39.347969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.033 [2024-04-26 20:52:39.348146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.034 [2024-04-26 20:52:39.348169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.034 [2024-04-26 20:52:39.357071] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.034 [2024-04-26 20:52:39.357246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.034 [2024-04-26 20:52:39.357268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.034 [2024-04-26 20:52:39.366184] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.034 [2024-04-26 20:52:39.366361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.034 [2024-04-26 20:52:39.366388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.293 [2024-04-26 20:52:39.375313] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.293 [2024-04-26 20:52:39.375492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.293 [2024-04-26 20:52:39.375513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.293 [2024-04-26 20:52:39.384441] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.293 [2024-04-26 20:52:39.384619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.293 [2024-04-26 20:52:39.384646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.293 [2024-04-26 20:52:39.393560] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.293 [2024-04-26 20:52:39.393737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.293 [2024-04-26 20:52:39.393759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.293 [2024-04-26 20:52:39.402682] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.293 [2024-04-26 20:52:39.402862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.293 [2024-04-26 20:52:39.402884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.293 [2024-04-26 20:52:39.411809] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:35:21.293 [2024-04-26 20:52:39.411963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.293 [2024-04-26 20:52:39.411985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.293 00:35:21.293 Latency(us) 00:35:21.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:21.293 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:21.293 nvme0n1 : 2.00 27512.57 107.47 0.00 0.00 4645.23 4190.85 12141.41 00:35:21.293 =================================================================================================================== 00:35:21.293 Total : 27512.57 107.47 0.00 0.00 4645.23 4190.85 12141.41 00:35:21.293 0 00:35:21.293 20:52:39 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:21.293 20:52:39 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:21.293 | .driver_specific 00:35:21.293 | .nvme_error 00:35:21.293 | .status_code 00:35:21.293 | .command_transient_transport_error' 00:35:21.293 20:52:39 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:21.293 20:52:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:21.293 20:52:39 -- host/digest.sh@71 -- # (( 216 > 0 )) 00:35:21.293 20:52:39 -- host/digest.sh@73 -- # killprocess 3782098 00:35:21.293 20:52:39 -- common/autotest_common.sh@926 -- # '[' -z 3782098 ']' 00:35:21.293 20:52:39 -- common/autotest_common.sh@930 -- # kill -0 3782098 00:35:21.293 20:52:39 -- common/autotest_common.sh@931 -- # uname 00:35:21.293 20:52:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:21.293 20:52:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3782098 00:35:21.293 20:52:39 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:35:21.293 20:52:39 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:35:21.293 20:52:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3782098' 00:35:21.293 killing process with pid 3782098 00:35:21.293 20:52:39 -- common/autotest_common.sh@945 -- # kill 3782098 00:35:21.293 Received shutdown signal, test time was about 2.000000 seconds 00:35:21.293 00:35:21.293 
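For reference, the 216 compared above is the per-bdev transient-transport-error counter, read back by pairing the bdev_get_iostat RPC with the jq filter from the get_transient_errcount trace. Assembled into one standalone pipeline (same rpc.py path, socket, and bdev name as in this run; the counter only exists because bdev_nvme_set_options --nvme-error-stat is in effect), it would look roughly like:

  # Sketch of get_transient_errcount as a single pipeline, assembled from the trace above.
  /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Each data digest error above surfaces as one command_transient_transport_error increment, which is why the check only asserts that the count is strictly positive.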
00:35:21.293 Latency(us)
00:35:21.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:21.293 ===================================================================================================================
00:35:21.293 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:21.293 20:52:39 -- common/autotest_common.sh@950 -- # wait 3782098
00:35:21.863 20:52:39 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16
00:35:21.863 20:52:39 -- host/digest.sh@54 -- # local rw bs qd
00:35:21.863 20:52:39 -- host/digest.sh@56 -- # rw=randwrite
00:35:21.863 20:52:39 -- host/digest.sh@56 -- # bs=131072
00:35:21.863 20:52:39 -- host/digest.sh@56 -- # qd=16
00:35:21.863 20:52:39 -- host/digest.sh@58 -- # bperfpid=3783003
00:35:21.863 20:52:39 -- host/digest.sh@60 -- # waitforlisten 3783003 /var/tmp/bperf.sock
00:35:21.863 20:52:39 -- common/autotest_common.sh@819 -- # '[' -z 3783003 ']'
00:35:21.863 20:52:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:21.863 20:52:39 -- common/autotest_common.sh@824 -- # local max_retries=100
00:35:21.863 20:52:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:35:21.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:35:21.863 20:52:39 -- common/autotest_common.sh@828 -- # xtrace_disable
00:35:21.863 20:52:39 -- common/autotest_common.sh@10 -- # set +x
00:35:21.863 20:52:39 -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:35:21.863 [2024-04-26 20:52:40.046609] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:35:21.863 [2024-04-26 20:52:40.046728] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3783003 ]
00:35:21.863 I/O size of 131072 is greater than zero copy threshold (65536).
00:35:21.863 Zero copy mechanism will not be used.
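The bdevperf command line traced at host/digest.sh@57 is where run_bperf_err's arguments land. An annotated sketch follows; the flag meanings are taken from bdevperf's usage text and are worth re-checking against the bdevperf built in this tree, since options can shift between SPDK versions:

  # Annotated sketch of the invocation above:
  #   -m 2                    core mask 0x2, i.e. run on core 1 (matches the EAL '-c 2' parameter above)
  #   -r /var/tmp/bperf.sock  RPC listen address; bperf_rpc and bperf_py talk to this socket
  #   -w randwrite            workload type (the rw argument)
  #   -o 131072               I/O size in bytes (the bs argument)
  #   -t 2                    run time in seconds
  #   -q 16                   queue depth (the qd argument)
  #   -z                      start idle and wait for a perform_tests RPC instead of running immediately
  /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z

The -z flag is what lets the test finish wiring up error injection (below) before any I/O is issued.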
00:35:21.863 EAL: No free 2048 kB hugepages reported on node 1
00:35:21.863 [2024-04-26 20:52:40.160020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:22.123 [2024-04-26 20:52:40.249359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:35:22.693 20:52:40 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:35:22.693 20:52:40 -- common/autotest_common.sh@852 -- # return 0
00:35:22.693 20:52:40 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:22.693 20:52:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:22.693 20:52:40 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:35:22.693 20:52:40 -- common/autotest_common.sh@551 -- # xtrace_disable
00:35:22.693 20:52:40 -- common/autotest_common.sh@10 -- # set +x
00:35:22.693 20:52:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:35:22.693 20:52:40 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:22.693 20:52:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:22.950 nvme0n1
00:35:22.950 20:52:41 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:35:22.950 20:52:41 -- common/autotest_common.sh@551 -- # xtrace_disable
00:35:22.950 20:52:41 -- common/autotest_common.sh@10 -- # set +x
00:35:22.950 20:52:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:35:22.950 20:52:41 -- host/digest.sh@69 -- # bperf_py perform_tests
00:35:22.950 20:52:41 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:23.209 I/O size of 131072 is greater than zero copy threshold (65536).
00:35:23.209 Zero copy mechanism will not be used.
00:35:23.209 Running I/O for 2 seconds...
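Stripped of the xtrace noise, the setup just traced reduces to a short RPC sequence. A condensed sketch follows; note the socket routing seen in the trace: bperf_rpc prefixes rpc.py with -s /var/tmp/bperf.sock toward bdevperf, while rpc_cmd's accel_error_inject_error calls carry no -s and so go over the default RPC socket, presumably to the NVMe-oF target where the error-injecting accel module runs. The -i 32 argument is reproduced from the trace rather than interpreted; see rpc.py's help for its exact semantics.

  RPC=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
  # 1. Keep per-bdev NVMe error counters and retry failed I/O indefinitely.
  $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # 2. Ensure crc32c error injection is disabled while connecting (trace order).
  $RPC accel_error_inject_error -o crc32c -t disable
  # 3. Attach the target over TCP with data digest enabled (--ddgst).
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # 4. Turn on crc32c corruption (-o crc32c -t corrupt -i 32, as traced) and start I/O.
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With crc32c results corrupted, the data digests on the TCP PDUs no longer verify, so each affected WRITE completes with TRANSIENT TRANSPORT ERROR (00/22) and dnr:0 (do-not-retry clear), letting the retry path exercise the counter checked by get_transient_errcount. That is the flood of entries that follows.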
00:35:23.209 [2024-04-26 20:52:41.345935] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90
00:35:23.209 [2024-04-26 20:52:41.346219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.209 [2024-04-26 20:52:41.346257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:23.209 [2024-04-26 20:52:41.354572] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90
00:35:23.209 [2024-04-26 20:52:41.354873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.209 [2024-04-26 20:52:41.354903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line pattern repeats several dozen more times at roughly 5-10 ms intervals on tqpair=(0x618000005080) with pdu=0x2000195fef90: every failed command is a WRITE on qid:1 cid:15 with len:32 (the 131072-byte I/O, 32 blocks of 4096 bytes), lba varying, and the completion's sqhd cycles through 0001/0021/0041/0061; the capture ends mid-entry at 20:52:41.771 ...]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.473 [2024-04-26 20:52:41.776718] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.473 [2024-04-26 20:52:41.776857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.473 [2024-04-26 20:52:41.776880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.473 [2024-04-26 20:52:41.782586] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.473 [2024-04-26 20:52:41.782776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.473 [2024-04-26 20:52:41.782800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.473 [2024-04-26 20:52:41.788774] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.473 [2024-04-26 20:52:41.789015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.473 [2024-04-26 20:52:41.789039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.473 [2024-04-26 20:52:41.795881] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.473 [2024-04-26 20:52:41.795991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.473 [2024-04-26 20:52:41.796016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.473 [2024-04-26 20:52:41.802646] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.473 [2024-04-26 20:52:41.802740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.473 [2024-04-26 20:52:41.802764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.474 [2024-04-26 20:52:41.809469] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.474 [2024-04-26 20:52:41.809591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.474 [2024-04-26 20:52:41.809615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.733 [2024-04-26 20:52:41.816232] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.733 [2024-04-26 20:52:41.816325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.733 [2024-04-26 20:52:41.816347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.733 [2024-04-26 20:52:41.822969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.733 [2024-04-26 20:52:41.823112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.733 [2024-04-26 20:52:41.823134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.734 [2024-04-26 20:52:41.829623] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.734 [2024-04-26 20:52:41.829727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.734 [2024-04-26 20:52:41.829749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.734 [2024-04-26 20:52:41.836591] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.734 [2024-04-26 20:52:41.836821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.734 [2024-04-26 20:52:41.836843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.734 [2024-04-26 20:52:41.843334] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.734 [2024-04-26 20:52:41.843443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.734 [2024-04-26 20:52:41.843465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.734 [2024-04-26 20:52:41.849862] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.734 [2024-04-26 20:52:41.849980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.734 [2024-04-26 20:52:41.850001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.734 [2024-04-26 20:52:41.856595] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.734 [2024-04-26 20:52:41.856721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.734 [2024-04-26 20:52:41.856742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.734 [2024-04-26 20:52:41.863019] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.734 [2024-04-26 20:52:41.863121] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.734 [2024-04-26 20:52:41.863142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.734 [2024-04-26 20:52:41.869734] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.734 [2024-04-26 20:52:41.869829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.734 [2024-04-26 20:52:41.869855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.734 [2024-04-26 20:52:41.876444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.734 [2024-04-26 20:52:41.876546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.734 [2024-04-26 20:52:41.876570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.734 [2024-04-26 20:52:41.883362] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.734 [2024-04-26 20:52:41.883498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.734 [2024-04-26 20:52:41.883522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.734 [2024-04-26 20:52:41.890276] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.734 [2024-04-26 20:52:41.890512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.734 [2024-04-26 20:52:41.890536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.734 [2024-04-26 20:52:41.896968] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.734 [2024-04-26 20:52:41.897113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.734 [2024-04-26 20:52:41.897136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.734 [2024-04-26 20:52:41.903656] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.734 [2024-04-26 20:52:41.903829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.734 [2024-04-26 20:52:41.903850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.734 [2024-04-26 20:52:41.910273] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x2000195fef90 00:35:23.734 [2024-04-26 20:52:41.910348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.734 [2024-04-26 20:52:41.910371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.734 [2024-04-26 20:52:41.917176] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.734 [2024-04-26 20:52:41.917302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.734 [2024-04-26 20:52:41.917324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.734 [2024-04-26 20:52:41.923939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.734 [2024-04-26 20:52:41.924074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.734 [2024-04-26 20:52:41.924095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.734 [2024-04-26 20:52:41.930657] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.734 [2024-04-26 20:52:41.930757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.734 [2024-04-26 20:52:41.930780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.734 [2024-04-26 20:52:41.937423] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.734 [2024-04-26 20:52:41.937549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.734 [2024-04-26 20:52:41.937573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.734 [2024-04-26 20:52:41.944093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.734 [2024-04-26 20:52:41.944333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.734 [2024-04-26 20:52:41.944357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.734 [2024-04-26 20:52:41.950966] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.734 [2024-04-26 20:52:41.951085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.734 [2024-04-26 20:52:41.951108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.734 [2024-04-26 20:52:41.957809] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.734 [2024-04-26 20:52:41.957916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.734 [2024-04-26 20:52:41.957940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.734 [2024-04-26 20:52:41.964597] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.734 [2024-04-26 20:52:41.964732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.734 [2024-04-26 20:52:41.964755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.734 [2024-04-26 20:52:41.971303] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.735 [2024-04-26 20:52:41.971409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-04-26 20:52:41.971431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.735 [2024-04-26 20:52:41.978170] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.735 [2024-04-26 20:52:41.978286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-04-26 20:52:41.978310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.735 [2024-04-26 20:52:41.985044] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.735 [2024-04-26 20:52:41.985172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-04-26 20:52:41.985199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.735 [2024-04-26 20:52:41.992109] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.735 [2024-04-26 20:52:41.992221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-04-26 20:52:41.992252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.735 [2024-04-26 20:52:41.998949] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.735 [2024-04-26 20:52:41.999058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-04-26 20:52:41.999088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.735 [2024-04-26 20:52:42.005751] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.735 [2024-04-26 20:52:42.005949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-04-26 20:52:42.005971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.735 [2024-04-26 20:52:42.012534] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.735 [2024-04-26 20:52:42.012633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-04-26 20:52:42.012656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.735 [2024-04-26 20:52:42.019310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.735 [2024-04-26 20:52:42.019423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-04-26 20:52:42.019446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.735 [2024-04-26 20:52:42.026077] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.735 [2024-04-26 20:52:42.026196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-04-26 20:52:42.026219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.735 [2024-04-26 20:52:42.032957] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.735 [2024-04-26 20:52:42.033077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-04-26 20:52:42.033105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.735 [2024-04-26 20:52:42.039740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.735 [2024-04-26 20:52:42.039848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-04-26 20:52:42.039872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.735 [2024-04-26 20:52:42.046710] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.735 [2024-04-26 20:52:42.046820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-04-26 20:52:42.046844] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.735 [2024-04-26 20:52:42.053506] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.735 [2024-04-26 20:52:42.053738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-04-26 20:52:42.053763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.735 [2024-04-26 20:52:42.060251] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.735 [2024-04-26 20:52:42.060425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-04-26 20:52:42.060453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.735 [2024-04-26 20:52:42.067014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.735 [2024-04-26 20:52:42.067111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-04-26 20:52:42.067135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.735 [2024-04-26 20:52:42.073834] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.735 [2024-04-26 20:52:42.073931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-04-26 20:52:42.073953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.996 [2024-04-26 20:52:42.080722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.996 [2024-04-26 20:52:42.080829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.996 [2024-04-26 20:52:42.080851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.996 [2024-04-26 20:52:42.087581] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.996 [2024-04-26 20:52:42.087703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.996 [2024-04-26 20:52:42.087725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.996 [2024-04-26 20:52:42.094466] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.996 [2024-04-26 20:52:42.094587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:23.996 [2024-04-26 20:52:42.094610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.996 [2024-04-26 20:52:42.101398] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.996 [2024-04-26 20:52:42.101514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.996 [2024-04-26 20:52:42.101542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.996 [2024-04-26 20:52:42.108159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.996 [2024-04-26 20:52:42.108420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.996 [2024-04-26 20:52:42.108443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.996 [2024-04-26 20:52:42.114870] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.996 [2024-04-26 20:52:42.115041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.996 [2024-04-26 20:52:42.115063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.996 [2024-04-26 20:52:42.121815] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.996 [2024-04-26 20:52:42.121916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.996 [2024-04-26 20:52:42.121939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.996 [2024-04-26 20:52:42.128809] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.996 [2024-04-26 20:52:42.128958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.996 [2024-04-26 20:52:42.128981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.996 [2024-04-26 20:52:42.135598] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.996 [2024-04-26 20:52:42.135708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.996 [2024-04-26 20:52:42.135731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.996 [2024-04-26 20:52:42.142406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.996 [2024-04-26 20:52:42.142506] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.996 [2024-04-26 20:52:42.142529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.996 [2024-04-26 20:52:42.149221] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.996 [2024-04-26 20:52:42.149353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.996 [2024-04-26 20:52:42.149375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.996 [2024-04-26 20:52:42.156138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.996 [2024-04-26 20:52:42.156255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.996 [2024-04-26 20:52:42.156278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.996 [2024-04-26 20:52:42.162976] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.996 [2024-04-26 20:52:42.163204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.996 [2024-04-26 20:52:42.163227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.996 [2024-04-26 20:52:42.169624] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.996 [2024-04-26 20:52:42.169750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.996 [2024-04-26 20:52:42.169772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.996 [2024-04-26 20:52:42.176373] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.996 [2024-04-26 20:52:42.176510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.996 [2024-04-26 20:52:42.176533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.996 [2024-04-26 20:52:42.183002] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.996 [2024-04-26 20:52:42.183120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.996 [2024-04-26 20:52:42.183143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.996 [2024-04-26 20:52:42.189918] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x2000195fef90 00:35:23.996 [2024-04-26 20:52:42.190045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.996 [2024-04-26 20:52:42.190067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.996 [2024-04-26 20:52:42.196851] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.996 [2024-04-26 20:52:42.196961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.996 [2024-04-26 20:52:42.196983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.996 [2024-04-26 20:52:42.203681] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.996 [2024-04-26 20:52:42.203803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.996 [2024-04-26 20:52:42.203825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.996 [2024-04-26 20:52:42.210456] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.996 [2024-04-26 20:52:42.210576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.996 [2024-04-26 20:52:42.210599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.996 [2024-04-26 20:52:42.217309] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.996 [2024-04-26 20:52:42.217544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.996 [2024-04-26 20:52:42.217575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.997 [2024-04-26 20:52:42.224118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.997 [2024-04-26 20:52:42.224281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.997 [2024-04-26 20:52:42.224303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.997 [2024-04-26 20:52:42.230856] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.997 [2024-04-26 20:52:42.230951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.997 [2024-04-26 20:52:42.230973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.997 [2024-04-26 20:52:42.237589] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.997 [2024-04-26 20:52:42.237665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.997 [2024-04-26 20:52:42.237688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.997 [2024-04-26 20:52:42.244238] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.997 [2024-04-26 20:52:42.244342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.997 [2024-04-26 20:52:42.244364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.997 [2024-04-26 20:52:42.251043] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.997 [2024-04-26 20:52:42.251149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.997 [2024-04-26 20:52:42.251176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.997 [2024-04-26 20:52:42.257781] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.997 [2024-04-26 20:52:42.257886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.997 [2024-04-26 20:52:42.257911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.997 [2024-04-26 20:52:42.264872] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.997 [2024-04-26 20:52:42.265004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.997 [2024-04-26 20:52:42.265028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.997 [2024-04-26 20:52:42.271772] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.997 [2024-04-26 20:52:42.271943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.997 [2024-04-26 20:52:42.271967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.997 [2024-04-26 20:52:42.278566] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.997 [2024-04-26 20:52:42.278729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.997 [2024-04-26 20:52:42.278754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.997 [2024-04-26 20:52:42.285478] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.997 [2024-04-26 20:52:42.285573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.997 [2024-04-26 20:52:42.285599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.997 [2024-04-26 20:52:42.292390] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.997 [2024-04-26 20:52:42.292502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.997 [2024-04-26 20:52:42.292527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.997 [2024-04-26 20:52:42.299164] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.997 [2024-04-26 20:52:42.299271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.997 [2024-04-26 20:52:42.299294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.997 [2024-04-26 20:52:42.306193] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.997 [2024-04-26 20:52:42.306311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.997 [2024-04-26 20:52:42.306334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.997 [2024-04-26 20:52:42.313015] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.997 [2024-04-26 20:52:42.313145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.997 [2024-04-26 20:52:42.313169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.997 [2024-04-26 20:52:42.319830] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.997 [2024-04-26 20:52:42.319951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.997 [2024-04-26 20:52:42.319974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.997 [2024-04-26 20:52:42.326666] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.997 [2024-04-26 20:52:42.326919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.997 [2024-04-26 20:52:42.326943] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.997 [2024-04-26 20:52:42.333587] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:23.997 [2024-04-26 20:52:42.333726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.997 [2024-04-26 20:52:42.333753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.259 [2024-04-26 20:52:42.340275] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:24.259 [2024-04-26 20:52:42.340403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.259 [2024-04-26 20:52:42.340427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.259 [2024-04-26 20:52:42.347291] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:24.259 [2024-04-26 20:52:42.347397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.259 [2024-04-26 20:52:42.347421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.259 [2024-04-26 20:52:42.354217] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:24.259 [2024-04-26 20:52:42.354341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.259 [2024-04-26 20:52:42.354363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.259 [2024-04-26 20:52:42.361084] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:24.259 [2024-04-26 20:52:42.361159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.259 [2024-04-26 20:52:42.361182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.259 [2024-04-26 20:52:42.368012] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:24.259 [2024-04-26 20:52:42.368135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.259 [2024-04-26 20:52:42.368158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.259 [2024-04-26 20:52:42.374886] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:24.259 [2024-04-26 20:52:42.374998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0
00:35:24.259 [2024-04-26 20:52:42.375021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:24.259 [2024-04-26 20:52:42.381748] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90
00:35:24.259 [2024-04-26 20:52:42.381847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.259 [2024-04-26 20:52:42.381872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... further identical triplets elided: tcp.c:2034:data_crc32_calc_done *ERROR* (Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90), a WRITE *NOTICE* (sqid:1 cid:15 nsid:1 len:32, lba varying per command), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion (qid:1 cid:15, sqhd cycling 0001/0021/0041/0061), repeating every ~5-8 ms from 20:52:42.388 through 20:52:43.180 ...]
00:35:25.048 [2024-04-26 20:52:43.186417] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90
00:35:25.048 [2024-04-26 20:52:43.186527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.048 [2024-04-26 20:52:43.186552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:25.048 [2024-04-26 20:52:43.192707] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with
pdu=0x2000195fef90 00:35:25.048 [2024-04-26 20:52:43.192820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.048 [2024-04-26 20:52:43.192866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.048 [2024-04-26 20:52:43.199469] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:25.048 [2024-04-26 20:52:43.199577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.048 [2024-04-26 20:52:43.199601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.048 [2024-04-26 20:52:43.205841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:25.048 [2024-04-26 20:52:43.205918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.048 [2024-04-26 20:52:43.205940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.048 [2024-04-26 20:52:43.212421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:25.048 [2024-04-26 20:52:43.212525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.048 [2024-04-26 20:52:43.212548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.048 [2024-04-26 20:52:43.219429] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:25.048 [2024-04-26 20:52:43.219531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.048 [2024-04-26 20:52:43.219558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.048 [2024-04-26 20:52:43.225713] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:25.048 [2024-04-26 20:52:43.225810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.048 [2024-04-26 20:52:43.225832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.048 [2024-04-26 20:52:43.230290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:25.048 [2024-04-26 20:52:43.230473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.048 [2024-04-26 20:52:43.230496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.048 [2024-04-26 20:52:43.235118] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:25.048 [2024-04-26 20:52:43.235262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.048 [2024-04-26 20:52:43.235286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.048 [2024-04-26 20:52:43.240014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:25.048 [2024-04-26 20:52:43.240152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.048 [2024-04-26 20:52:43.240180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.048 [2024-04-26 20:52:43.244461] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:25.048 [2024-04-26 20:52:43.244577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.048 [2024-04-26 20:52:43.244604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.048 [2024-04-26 20:52:43.248909] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:25.048 [2024-04-26 20:52:43.248993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.048 [2024-04-26 20:52:43.249015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.048 [2024-04-26 20:52:43.253673] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:25.048 [2024-04-26 20:52:43.253795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.048 [2024-04-26 20:52:43.253821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.048 [2024-04-26 20:52:43.258013] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:25.048 [2024-04-26 20:52:43.258120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.048 [2024-04-26 20:52:43.258142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.048 [2024-04-26 20:52:43.262511] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:25.048 [2024-04-26 20:52:43.262635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.048 [2024-04-26 20:52:43.262658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.048 [2024-04-26 20:52:43.266919] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:25.049 [2024-04-26 20:52:43.267037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.049 [2024-04-26 20:52:43.267060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.049 [2024-04-26 20:52:43.271583] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:25.049 [2024-04-26 20:52:43.271717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.049 [2024-04-26 20:52:43.271740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.049 [2024-04-26 20:52:43.276073] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:25.049 [2024-04-26 20:52:43.276184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.049 [2024-04-26 20:52:43.276208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.049 [2024-04-26 20:52:43.280610] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:25.049 [2024-04-26 20:52:43.280729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.049 [2024-04-26 20:52:43.280752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.049 [2024-04-26 20:52:43.285250] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:25.049 [2024-04-26 20:52:43.285316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.049 [2024-04-26 20:52:43.285340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.049 [2024-04-26 20:52:43.289786] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:25.049 [2024-04-26 20:52:43.289949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.049 [2024-04-26 20:52:43.289972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.049 [2024-04-26 20:52:43.294203] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:25.049 [2024-04-26 20:52:43.294318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.049 [2024-04-26 20:52:43.294341] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.049 [2024-04-26 20:52:43.298742] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:25.049 [2024-04-26 20:52:43.298888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.049 [2024-04-26 20:52:43.298911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.049 [2024-04-26 20:52:43.303310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:25.049 [2024-04-26 20:52:43.303432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.049 [2024-04-26 20:52:43.303454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.049 [2024-04-26 20:52:43.307772] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:25.049 [2024-04-26 20:52:43.307877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.049 [2024-04-26 20:52:43.307898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.049 [2024-04-26 20:52:43.312315] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:25.049 [2024-04-26 20:52:43.312447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.049 [2024-04-26 20:52:43.312469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.049 [2024-04-26 20:52:43.316855] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:25.049 [2024-04-26 20:52:43.316939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.049 [2024-04-26 20:52:43.316961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.049 [2024-04-26 20:52:43.321091] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:25.049 [2024-04-26 20:52:43.321246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.049 [2024-04-26 20:52:43.321269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.049 [2024-04-26 20:52:43.325866] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:25.049 [2024-04-26 20:52:43.326016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0
00:35:25.049 [2024-04-26 20:52:43.326039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:25.049 [2024-04-26 20:52:43.330321] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90
00:35:25.049 [2024-04-26 20:52:43.330464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.049 [2024-04-26 20:52:43.330491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:25.049
00:35:25.049 Latency(us)
00:35:25.049 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:25.049 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:35:25.049 nvme0n1 : 2.00 4793.48 599.19 0.00 0.00 3333.57 1983.33 10554.75
00:35:25.049 ===================================================================================================================
00:35:25.049 Total : 4793.48 599.19 0.00 0.00 3333.57 1983.33 10554.75
00:35:25.049 0
00:35:25.049 20:52:43 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:25.049 20:52:43 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:25.049 20:52:43 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:25.049 | .driver_specific
00:35:25.049 | .nvme_error
00:35:25.049 | .status_code
00:35:25.049 | .command_transient_transport_error'
00:35:25.049 20:52:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:25.307 20:52:43 -- host/digest.sh@71 -- # (( 309 > 0 ))
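The two lines above are the actual pass/fail gate of host/digest.sh: it reads bdevperf's per-bdev NVMe error counters back over the private RPC socket and asserts that the injected data-digest corruption really surfaced as transient transport errors (309 of them in this run). A minimal stand-alone sketch of that check, assuming an SPDK checkout at ./spdk and a bdevperf instance serving RPCs on /var/tmp/bperf.sock as in this log:

  # Count completions bdevperf saw with status COMMAND TRANSIENT TRANSPORT ERROR (00/22).
  get_transient_errcount() {
    local bdev=$1
    ./spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
  }
  errs=$(get_transient_errcount nvme0n1)
  (( errs > 0 )) || echo "expected digest-induced transient errors, got $errs" >&2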
00:35:25.307 20:52:43 -- host/digest.sh@73 -- # killprocess 3783003
00:35:25.307 20:52:43 -- common/autotest_common.sh@926 -- # '[' -z 3783003 ']'
00:35:25.307 20:52:43 -- common/autotest_common.sh@930 -- # kill -0 3783003
00:35:25.307 20:52:43 -- common/autotest_common.sh@931 -- # uname
00:35:25.307 20:52:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:35:25.307 20:52:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3783003
00:35:25.307 20:52:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:35:25.307 20:52:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:35:25.307 20:52:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3783003'
00:35:25.307 killing process with pid 3783003
00:35:25.307 20:52:43 -- common/autotest_common.sh@945 -- # kill 3783003
00:35:25.307 Received shutdown signal, test time was about 2.000000 seconds
00:35:25.307
00:35:25.307 Latency(us)
00:35:25.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:25.307 ===================================================================================================================
00:35:25.307 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:25.307 20:52:43 -- common/autotest_common.sh@950 -- # wait 3783003
00:35:25.565 20:52:43 -- host/digest.sh@115 -- # killprocess 3780525
00:35:25.565 20:52:43 -- common/autotest_common.sh@926 -- # '[' -z 3780525 ']'
00:35:25.565 20:52:43 -- common/autotest_common.sh@930 -- # kill -0 3780525
00:35:25.565 20:52:43 -- common/autotest_common.sh@931 -- # uname
00:35:25.565 20:52:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:35:25.565 20:52:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3780525
00:35:25.823 20:52:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:35:25.823 20:52:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:35:25.823 20:52:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3780525'
00:35:25.823 killing process with pid 3780525
00:35:25.823 20:52:43 -- common/autotest_common.sh@945 -- # kill 3780525
00:35:25.823 20:52:43 -- common/autotest_common.sh@950 -- # wait 3780525
00:35:26.083
00:35:26.083 real 0m16.881s
00:35:26.083 user 0m32.431s
00:35:26.083 sys 0m3.243s
00:35:26.083 20:52:44 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:35:26.083 20:52:44 -- common/autotest_common.sh@10 -- # set +x
00:35:26.083 ************************************
00:35:26.083 END TEST nvmf_digest_error
00:35:26.083 ************************************
00:35:26.083 20:52:44 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT
00:35:26.083 20:52:44 -- host/digest.sh@139 -- # nvmftestfini
00:35:26.083 20:52:44 -- nvmf/common.sh@476 -- # nvmfcleanup
00:35:26.083 20:52:44 -- nvmf/common.sh@116 -- # sync
00:35:26.083 20:52:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:35:26.083 20:52:44 -- nvmf/common.sh@119 -- # set +e
00:35:26.083 20:52:44 -- nvmf/common.sh@120 -- # for i in {1..20}
00:35:26.083 20:52:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:35:26.083 rmmod nvme_tcp
00:35:26.342 rmmod nvme_fabrics
00:35:26.342 rmmod nvme_keyring
00:35:26.342 20:52:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:35:26.342 20:52:44 -- nvmf/common.sh@123 -- # set -e
00:35:26.342 20:52:44 -- nvmf/common.sh@124 -- # return 0
00:35:26.342 20:52:44 -- nvmf/common.sh@477 -- # '[' -n 3780525 ']'
00:35:26.342 20:52:44 -- nvmf/common.sh@478 -- # killprocess 3780525
00:35:26.342 20:52:44 -- common/autotest_common.sh@926 -- # '[' -z 3780525 ']'
00:35:26.342 20:52:44 -- common/autotest_common.sh@930 -- # kill -0 3780525
00:35:26.342 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3780525) - No such process
00:35:26.342 20:52:44 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3780525 is not found'
00:35:26.342 Process with pid 3780525 is not found
00:35:26.342 20:52:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:35:26.342 20:52:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:35:26.342 20:52:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:35:26.342 20:52:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:35:26.342 20:52:44 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:35:26.342 20:52:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:26.342 20:52:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:35:26.342 20:52:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:28.251 20:52:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:35:28.251
00:35:28.251 real 1m13.611s
00:35:28.251 user 1m42.683s
00:35:28.251 sys 0m11.743s
00:35:28.251 20:52:46 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:35:28.251 20:52:46 -- common/autotest_common.sh@10 -- # set +x
00:35:28.251 ************************************
00:35:28.251 END TEST nvmf_digest
00:35:28.251 ************************************
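nvmftestfini, traced above, is the transport-agnostic teardown: sync, retry unloading the kernel initiator modules (the rmmod lines are modprobe -v output), kill the target app if it is still alive (here it is already gone, hence the expected "No such process"), and finally flush the test IP off the initiator interface. A condensed sketch of the unload loop as the trace shows it; the back-off between retries is an assumption, since this run succeeds on the first attempt:

  set +e
  for i in {1..20}; do
    sudo modprobe -v -r nvme-tcp && break   # module can stay busy briefly while queues drain
    sleep 1                                 # assumed back-off, not visible in this log
  done
  sudo modprobe -v -r nvme-fabrics
  set -e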
00:35:28.251 20:52:46 -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]]
00:35:28.251 20:52:46 -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]]
00:35:28.251 20:52:46 -- nvmf/nvmf.sh@119 -- # [[ phy-fallback == phy ]]
00:35:28.251 20:52:46 -- nvmf/nvmf.sh@126 -- # timing_exit host
00:35:28.251 20:52:46 -- common/autotest_common.sh@718 -- # xtrace_disable
00:35:28.251 20:52:46 -- common/autotest_common.sh@10 -- # set +x
00:35:28.251 20:52:46 -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT
00:35:28.251
00:35:28.251 real 21m59.533s
00:35:28.251 user 60m53.327s
00:35:28.251 sys 4m41.973s
00:35:28.251 20:52:46 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:35:28.251 20:52:46 -- common/autotest_common.sh@10 -- # set +x
00:35:28.251 ************************************
00:35:28.251 END TEST nvmf_tcp
00:35:28.251 ************************************
00:35:28.510 20:52:46 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]]
00:35:28.510 20:52:46 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:35:28.510 20:52:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:35:28.510 20:52:46 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:35:28.510 20:52:46 -- common/autotest_common.sh@10 -- # set +x
00:35:28.510 ************************************
00:35:28.510 START TEST spdkcli_nvmf_tcp
00:35:28.510 ************************************
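Every suite in this log runs through run_test, which produces the START/END banner pairs and the per-test real/user/sys triples seen above. A simplified sketch of the wrapper's shape, inferred from its output here rather than copied from autotest_common.sh:

  run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"        # the suite itself, e.g. ./test/spdkcli/nvmf.sh --transport=tcp
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
  }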
00:35:28.510 20:52:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:35:28.510 * Looking for test storage...
00:35:28.510 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli
00:35:28.510 20:52:46 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/common.sh
00:35:28.510 20:52:46 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:35:28.510 20:52:46 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/clear_config.py
00:35:28.510 20:52:46 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh
00:35:28.510 20:52:46 -- nvmf/common.sh@7 -- # uname -s
00:35:28.510 20:52:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:35:28.510 20:52:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:35:28.510 20:52:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:35:28.510 20:52:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:35:28.510 20:52:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:35:28.510 20:52:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:35:28.510 20:52:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:35:28.511 20:52:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:35:28.511 20:52:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:35:28.511 20:52:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:35:28.511 20:52:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda
00:35:28.511 20:52:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda
00:35:28.511 20:52:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:35:28.511 20:52:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:35:28.511 20:52:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:35:28.511 20:52:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh
00:35:28.511 20:52:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:35:28.511 20:52:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:35:28.511 20:52:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:35:28.511 20:52:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:28.511 20:52:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:28.511 20:52:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:28.511 20:52:46 -- paths/export.sh@5 -- # export PATH
00:35:28.511 20:52:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:28.511 20:52:46 -- nvmf/common.sh@46 -- # : 0
00:35:28.511 20:52:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:35:28.511 20:52:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:35:28.511 20:52:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:35:28.511 20:52:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:35:28.511 20:52:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:35:28.511 20:52:46 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:35:28.511 20:52:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:35:28.511 20:52:46 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:35:28.511 20:52:46 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test
00:35:28.511 20:52:46 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf
00:35:28.511 20:52:46 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT
00:35:28.511 20:52:46 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt
00:35:28.511 20:52:46 -- common/autotest_common.sh@712 -- # xtrace_disable
00:35:28.511 20:52:46 -- common/autotest_common.sh@10 -- # set +x
00:35:28.511 20:52:46 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt
00:35:28.511 20:52:46 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3784362
00:35:28.511 20:52:46 -- spdkcli/common.sh@34 -- # waitforlisten 3784362
00:35:28.511 20:52:46 -- common/autotest_common.sh@819 -- # '[' -z 3784362 ']'
00:35:28.511 20:52:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:28.511 20:52:46 -- common/autotest_common.sh@824 -- # local max_retries=100
00:35:28.511 20:52:46 -- common/autotest_common.sh@826 -- # echo 'Waiting
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:28.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:28.511 20:52:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:35:28.511 20:52:46 -- common/autotest_common.sh@10 -- # set +x 00:35:28.511 20:52:46 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:28.511 [2024-04-26 20:52:46.774020] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:35:28.511 [2024-04-26 20:52:46.774133] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3784362 ] 00:35:28.511 EAL: No free 2048 kB hugepages reported on node 1 00:35:28.771 [2024-04-26 20:52:46.887215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:28.771 [2024-04-26 20:52:46.983429] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:35:28.771 [2024-04-26 20:52:46.983741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:28.771 [2024-04-26 20:52:46.983750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:29.340 20:52:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:29.340 20:52:47 -- common/autotest_common.sh@852 -- # return 0 00:35:29.340 20:52:47 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:29.340 20:52:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:29.340 20:52:47 -- common/autotest_common.sh@10 -- # set +x 00:35:29.340 20:52:47 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:29.340 20:52:47 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:29.340 20:52:47 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:29.340 20:52:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:35:29.340 20:52:47 -- common/autotest_common.sh@10 -- # set +x 00:35:29.340 20:52:47 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:29.340 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:29.340 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:29.340 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:29.340 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:29.340 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:29.340 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:29.340 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:29.340 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:29.340 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:29.340 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:29.341 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:29.341 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create 
Malloc2'\'' '\''Malloc2'\'' True 00:35:29.341 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:29.341 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:29.341 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:29.341 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:29.341 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:29.341 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:29.341 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:29.341 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:29.341 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:29.341 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:29.341 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:29.341 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:29.341 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:29.341 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:29.341 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:29.341 ' 00:35:29.599 [2024-04-26 20:52:47.823051] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:35:32.136 [2024-04-26 20:52:49.876409] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:32.702 [2024-04-26 20:52:51.038298] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:35.235 [2024-04-26 20:52:53.169178] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:35:37.142 [2024-04-26 20:52:54.999867] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:38.080 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:38.080 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:38.080 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:38.080 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:38.080 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:38.080 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:38.080 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:38.080 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:38.080 
Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True]
00:35:38.080 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True]
00:35:38.080 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:35:38.080 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:35:38.080 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True]
00:35:38.080 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:35:38.080 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:35:38.080 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True]
00:35:38.080 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:35:38.080 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:35:38.080 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True]
00:35:38.080 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:35:38.080 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False]
00:35:38.080 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True]
00:35:38.080 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:35:38.080 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True]
00:35:38.080 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:35:38.080 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True]
00:35:38.080 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True]
00:35:38.080 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False]
00:35:38.340 20:52:56 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config
00:35:38.340 20:52:56 -- common/autotest_common.sh@718 -- # xtrace_disable
00:35:38.340 20:52:56 -- common/autotest_common.sh@10 -- # set +x
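Each "Executing command" line above is spdkcli_job.py replaying one navigation/command pair against the running target; under the hood these land on the same JSON-RPC methods a plain script would call. A rough hand translation of the first few, as a sketch rather than what the test actually invokes:

  ./scripts/rpc.py bdev_malloc_create 32 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_transport -t tcp -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -s N37SXV509SRW -m 4 -a
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260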
00:35:38.340 20:52:56 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match
00:35:38.340 20:52:56 -- common/autotest_common.sh@712 -- # xtrace_disable
00:35:38.340 20:52:56 -- common/autotest_common.sh@10 -- # set +x
00:35:38.340 20:52:56 -- spdkcli/nvmf.sh@69 -- # check_match
00:35:38.340 20:52:56 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf
00:35:38.599 20:52:56 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match
00:35:38.599 20:52:56 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test
00:35:38.599 20:52:56 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match
00:35:38.599 20:52:56 -- common/autotest_common.sh@718 -- # xtrace_disable
00:35:38.599 20:52:56 -- common/autotest_common.sh@10 -- # set +x
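check_match, traced above, dumps the live tree with spdkcli.py ll /nvmf and compares it to the stored spdkcli_nvmf.test.match pattern using the test/app/match helper, which tolerates wildcard fields (serial numbers, addresses) that change run to run. A rough plain-tools equivalent, minus that wildcard support:

  ./scripts/spdkcli.py ll /nvmf > /tmp/spdkcli_nvmf.test
  diff -u test/spdkcli/match_files/spdkcli_nvmf.test.match /tmp/spdkcli_nvmf.test && echo MATCH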
False] 00:35:44.132 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:44.132 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:44.132 20:53:01 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:44.132 20:53:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:44.132 20:53:01 -- common/autotest_common.sh@10 -- # set +x 00:35:44.132 20:53:01 -- spdkcli/nvmf.sh@90 -- # killprocess 3784362 00:35:44.132 20:53:01 -- common/autotest_common.sh@926 -- # '[' -z 3784362 ']' 00:35:44.132 20:53:01 -- common/autotest_common.sh@930 -- # kill -0 3784362 00:35:44.132 20:53:01 -- common/autotest_common.sh@931 -- # uname 00:35:44.132 20:53:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:44.132 20:53:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3784362 00:35:44.132 20:53:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:44.132 20:53:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:44.132 20:53:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3784362' 00:35:44.132 killing process with pid 3784362 00:35:44.132 20:53:01 -- common/autotest_common.sh@945 -- # kill 3784362 00:35:44.132 [2024-04-26 20:53:01.994221] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:35:44.132 20:53:01 -- common/autotest_common.sh@950 -- # wait 3784362 00:35:44.132 20:53:02 -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:44.132 20:53:02 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:44.132 20:53:02 -- spdkcli/common.sh@13 -- # '[' -n 3784362 ']' 00:35:44.132 20:53:02 -- spdkcli/common.sh@14 -- # killprocess 3784362 00:35:44.132 20:53:02 -- common/autotest_common.sh@926 -- # '[' -z 3784362 ']' 00:35:44.132 20:53:02 -- common/autotest_common.sh@930 -- # kill -0 3784362 00:35:44.132 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3784362) - No such process 00:35:44.132 20:53:02 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3784362 is not found' 00:35:44.132 Process with pid 3784362 is not found 00:35:44.132 20:53:02 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:44.132 20:53:02 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:44.132 20:53:02 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:44.132 00:35:44.132 real 0m15.830s 00:35:44.132 user 0m32.049s 00:35:44.132 sys 0m0.719s 00:35:44.132 20:53:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:44.132 20:53:02 -- common/autotest_common.sh@10 -- # set +x 00:35:44.132 ************************************ 00:35:44.132 END TEST spdkcli_nvmf_tcp 00:35:44.132 ************************************ 00:35:44.390 20:53:02 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:44.390 20:53:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:35:44.390 20:53:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:44.390 20:53:02 -- common/autotest_common.sh@10 -- # set +x 00:35:44.390 ************************************ 00:35:44.390 START TEST nvmf_identify_passthru 00:35:44.390 ************************************ 
00:35:44.390 20:53:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:44.390 * Looking for test storage... 00:35:44.390 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:35:44.390 20:53:02 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:35:44.390 20:53:02 -- nvmf/common.sh@7 -- # uname -s 00:35:44.390 20:53:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:44.390 20:53:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:44.390 20:53:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:44.390 20:53:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:44.390 20:53:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:44.390 20:53:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:44.390 20:53:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:44.390 20:53:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:44.390 20:53:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:44.390 20:53:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:44.390 20:53:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:35:44.390 20:53:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:35:44.390 20:53:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:44.390 20:53:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:44.390 20:53:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:35:44.390 20:53:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:35:44.390 20:53:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:44.390 20:53:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:44.390 20:53:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:44.390 20:53:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.390 20:53:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.390 20:53:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.390 20:53:02 -- paths/export.sh@5 -- # export PATH 00:35:44.390 20:53:02 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.390 20:53:02 -- nvmf/common.sh@46 -- # : 0 00:35:44.390 20:53:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:35:44.390 20:53:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:35:44.390 20:53:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:35:44.390 20:53:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:44.390 20:53:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:44.390 20:53:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:35:44.390 20:53:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:35:44.390 20:53:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:35:44.390 20:53:02 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:35:44.390 20:53:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:44.390 20:53:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:44.390 20:53:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:44.390 20:53:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.390 20:53:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.390 20:53:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.390 20:53:02 -- paths/export.sh@5 -- # export PATH 00:35:44.390 20:53:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.390 20:53:02 -- 
target/identify_passthru.sh@12 -- # nvmftestinit
00:35:44.391 20:53:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:35:44.391 20:53:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:35:44.391 20:53:02 -- nvmf/common.sh@436 -- # prepare_net_devs
00:35:44.391 20:53:02 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:35:44.391 20:53:02 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:35:44.391 20:53:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:44.391 20:53:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:35:44.391 20:53:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:44.391 20:53:02 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]]
00:35:44.391 20:53:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:35:44.391 20:53:02 -- nvmf/common.sh@284 -- # xtrace_disable
00:35:44.391 20:53:02 -- common/autotest_common.sh@10 -- # set +x
00:35:50.961 20:53:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
00:35:50.961 20:53:08 -- nvmf/common.sh@290 -- # pci_devs=()
00:35:50.961 20:53:08 -- nvmf/common.sh@290 -- # local -a pci_devs
00:35:50.961 20:53:08 -- nvmf/common.sh@291 -- # pci_net_devs=()
00:35:50.961 20:53:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs
00:35:50.961 20:53:08 -- nvmf/common.sh@292 -- # pci_drivers=()
00:35:50.961 20:53:08 -- nvmf/common.sh@292 -- # local -A pci_drivers
00:35:50.961 20:53:08 -- nvmf/common.sh@294 -- # net_devs=()
00:35:50.961 20:53:08 -- nvmf/common.sh@294 -- # local -ga net_devs
00:35:50.961 20:53:08 -- nvmf/common.sh@295 -- # e810=()
00:35:50.961 20:53:08 -- nvmf/common.sh@295 -- # local -ga e810
00:35:50.961 20:53:08 -- nvmf/common.sh@296 -- # x722=()
00:35:50.961 20:53:08 -- nvmf/common.sh@296 -- # local -ga x722
00:35:50.961 20:53:08 -- nvmf/common.sh@297 -- # mlx=()
00:35:50.961 20:53:08 -- nvmf/common.sh@297 -- # local -ga mlx
00:35:50.961 20:53:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:35:50.961 20:53:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:35:50.961 20:53:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:35:50.961 20:53:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:35:50.961 20:53:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:35:50.961 20:53:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:35:50.961 20:53:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:35:50.961 20:53:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:35:50.961 20:53:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:35:50.961 20:53:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:35:50.961 20:53:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:35:50.961 20:53:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}")
00:35:50.961 20:53:08 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]]
00:35:50.961 20:53:08 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]]
00:35:50.961 20:53:08 -- nvmf/common.sh@328 -- # [[ '' == e810 ]]
00:35:50.961 20:53:08 -- nvmf/common.sh@330 -- # [[ '' == x722 ]]
00:35:50.961 20:53:08 -- nvmf/common.sh@334 -- # (( 2 == 0 ))
00:35:50.961 20:53:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:35:50.961 20:53:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)'
00:35:50.961 Found 0000:27:00.0 (0x8086 - 0x159b)
0x159b) 00:35:50.961 20:53:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:35:50.961 20:53:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:35:50.961 20:53:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:50.961 20:53:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:50.961 20:53:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:35:50.961 20:53:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:35:50.961 20:53:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:35:50.961 Found 0000:27:00.1 (0x8086 - 0x159b) 00:35:50.961 20:53:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:35:50.961 20:53:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:35:50.961 20:53:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:50.961 20:53:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:50.961 20:53:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:35:50.961 20:53:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:35:50.961 20:53:08 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:35:50.961 20:53:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:35:50.961 20:53:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:50.961 20:53:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:35:50.961 20:53:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:50.961 20:53:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:35:50.961 Found net devices under 0000:27:00.0: cvl_0_0 00:35:50.961 20:53:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:35:50.961 20:53:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:35:50.961 20:53:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:50.961 20:53:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:35:50.961 20:53:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:50.961 20:53:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:35:50.961 Found net devices under 0000:27:00.1: cvl_0_1 00:35:50.961 20:53:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:35:50.961 20:53:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:35:50.961 20:53:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:35:50.961 20:53:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:35:50.961 20:53:08 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:35:50.961 20:53:08 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:35:50.961 20:53:08 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:50.961 20:53:08 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:50.961 20:53:08 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:50.961 20:53:08 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:35:50.961 20:53:08 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:50.961 20:53:08 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:50.961 20:53:08 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:35:50.961 20:53:08 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:50.961 20:53:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:50.961 20:53:08 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:35:50.961 20:53:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:35:50.961 20:53:08 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:35:50.961 20:53:08 -- nvmf/common.sh@250 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:35:50.961 20:53:08 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:50.961 20:53:08 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:50.961 20:53:08 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:35:50.961 20:53:08 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:50.961 20:53:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:50.961 20:53:08 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:50.961 20:53:08 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:35:50.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:50.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.574 ms 00:35:50.961 00:35:50.961 --- 10.0.0.2 ping statistics --- 00:35:50.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:50.961 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms 00:35:50.961 20:53:08 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:50.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:50.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:35:50.961 00:35:50.961 --- 10.0.0.1 ping statistics --- 00:35:50.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:50.961 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:35:50.961 20:53:08 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:50.961 20:53:08 -- nvmf/common.sh@410 -- # return 0 00:35:50.961 20:53:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:35:50.961 20:53:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:50.961 20:53:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:35:50.961 20:53:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:35:50.961 20:53:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:50.961 20:53:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:35:50.961 20:53:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:35:50.961 20:53:08 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:50.961 20:53:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:35:50.961 20:53:08 -- common/autotest_common.sh@10 -- # set +x 00:35:50.961 20:53:08 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:50.961 20:53:08 -- common/autotest_common.sh@1509 -- # bdfs=() 00:35:50.961 20:53:08 -- common/autotest_common.sh@1509 -- # local bdfs 00:35:50.961 20:53:08 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:35:50.961 20:53:08 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:35:50.961 20:53:08 -- common/autotest_common.sh@1498 -- # bdfs=() 00:35:50.961 20:53:08 -- common/autotest_common.sh@1498 -- # local bdfs 00:35:50.961 20:53:08 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:50.961 20:53:08 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:50.961 20:53:08 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:35:50.961 20:53:08 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:35:50.961 20:53:08 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:c9:00.0 0000:ca:00.0 00:35:50.961 20:53:08 -- common/autotest_common.sh@1512 -- # echo 0000:c9:00.0 00:35:50.961 20:53:08 -- target/identify_passthru.sh@16 -- # bdf=0000:c9:00.0 00:35:50.961 20:53:08 -- 
target/identify_passthru.sh@17 -- # '[' -z 0000:c9:00.0 ']' 00:35:50.961 20:53:08 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:c9:00.0' -i 0 00:35:50.961 20:53:08 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:50.961 20:53:08 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:50.961 EAL: No free 2048 kB hugepages reported on node 1 00:35:56.236 20:53:13 -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ9413009R2P0BGN 00:35:56.236 20:53:13 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:c9:00.0' -i 0 00:35:56.236 20:53:13 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:56.236 20:53:13 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:56.236 EAL: No free 2048 kB hugepages reported on node 1 00:36:01.604 20:53:19 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:36:01.604 20:53:19 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:01.604 20:53:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:36:01.604 20:53:19 -- common/autotest_common.sh@10 -- # set +x 00:36:01.604 20:53:19 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:01.604 20:53:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:36:01.604 20:53:19 -- common/autotest_common.sh@10 -- # set +x 00:36:01.604 20:53:19 -- target/identify_passthru.sh@31 -- # nvmfpid=3793147 00:36:01.604 20:53:19 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:01.604 20:53:19 -- target/identify_passthru.sh@35 -- # waitforlisten 3793147 00:36:01.604 20:53:19 -- common/autotest_common.sh@819 -- # '[' -z 3793147 ']' 00:36:01.604 20:53:19 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:01.604 20:53:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:01.604 20:53:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:36:01.604 20:53:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:01.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:01.604 20:53:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:36:01.604 20:53:19 -- common/autotest_common.sh@10 -- # set +x 00:36:01.604 [2024-04-26 20:53:19.130525] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:36:01.604 [2024-04-26 20:53:19.130639] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:01.604 EAL: No free 2048 kB hugepages reported on node 1 00:36:01.604 [2024-04-26 20:53:19.252307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:01.604 [2024-04-26 20:53:19.352418] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:36:01.604 [2024-04-26 20:53:19.352592] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
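Note: the get_first_nvme_bdf and Serial Number / Model Number extraction traced above reduce to one short pipeline. A minimal standalone sketch, assuming the same checkout layout as this workspace (the BDF is whatever gen_nvme.sh reports first; nothing here beyond the traced commands):

    # Enumerate NVMe BDFs from gen_nvme.sh JSON and keep the first one, then
    # pull serial/model with spdk_nvme_identify, mirroring
    # identify_passthru.sh@16..@24 in the trace above.
    rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk   # this CI node's checkout
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    bdf=${bdfs[0]}
    serial=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
             | grep 'Serial Number:' | awk '{print $3}')
    model=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
            | grep 'Model Number:' | awk '{print $3}')
    echo "$bdf serial=$serial model=$model"

The test later repeats the same grep/awk over the NVMe-oF connection (trtype:tcp ... subnqn:nqn.2016-06.io.spdk:cnode1) and asserts both values match the PCIe-side ones; that comparison is the passthru check itself.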
00:36:01.604 [2024-04-26 20:53:19.352605] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:01.604 [2024-04-26 20:53:19.352615] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:01.604 [2024-04-26 20:53:19.352689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:01.604 [2024-04-26 20:53:19.352787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:01.604 [2024-04-26 20:53:19.352887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:01.604 [2024-04-26 20:53:19.352897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:01.604 20:53:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:36:01.604 20:53:19 -- common/autotest_common.sh@852 -- # return 0 00:36:01.604 20:53:19 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:01.604 20:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:01.604 20:53:19 -- common/autotest_common.sh@10 -- # set +x 00:36:01.604 INFO: Log level set to 20 00:36:01.604 INFO: Requests: 00:36:01.604 { 00:36:01.604 "jsonrpc": "2.0", 00:36:01.604 "method": "nvmf_set_config", 00:36:01.604 "id": 1, 00:36:01.604 "params": { 00:36:01.604 "admin_cmd_passthru": { 00:36:01.604 "identify_ctrlr": true 00:36:01.604 } 00:36:01.604 } 00:36:01.604 } 00:36:01.604 00:36:01.604 INFO: response: 00:36:01.604 { 00:36:01.604 "jsonrpc": "2.0", 00:36:01.604 "id": 1, 00:36:01.604 "result": true 00:36:01.604 } 00:36:01.604 00:36:01.604 20:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:01.604 20:53:19 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:01.604 20:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:01.604 20:53:19 -- common/autotest_common.sh@10 -- # set +x 00:36:01.604 INFO: Setting log level to 20 00:36:01.604 INFO: Setting log level to 20 00:36:01.604 INFO: Log level set to 20 00:36:01.604 INFO: Log level set to 20 00:36:01.604 INFO: Requests: 00:36:01.604 { 00:36:01.604 "jsonrpc": "2.0", 00:36:01.604 "method": "framework_start_init", 00:36:01.604 "id": 1 00:36:01.604 } 00:36:01.604 00:36:01.604 INFO: Requests: 00:36:01.604 { 00:36:01.604 "jsonrpc": "2.0", 00:36:01.604 "method": "framework_start_init", 00:36:01.604 "id": 1 00:36:01.604 } 00:36:01.604 00:36:01.863 [2024-04-26 20:53:20.006630] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:01.863 INFO: response: 00:36:01.863 { 00:36:01.863 "jsonrpc": "2.0", 00:36:01.863 "id": 1, 00:36:01.863 "result": true 00:36:01.863 } 00:36:01.863 00:36:01.863 INFO: response: 00:36:01.863 { 00:36:01.863 "jsonrpc": "2.0", 00:36:01.863 "id": 1, 00:36:01.863 "result": true 00:36:01.863 } 00:36:01.863 00:36:01.863 20:53:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:01.863 20:53:20 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:01.863 20:53:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:01.863 20:53:20 -- common/autotest_common.sh@10 -- # set +x 00:36:01.863 INFO: Setting log level to 40 00:36:01.863 INFO: Setting log level to 40 00:36:01.863 INFO: Setting log level to 40 00:36:01.863 [2024-04-26 20:53:20.018142] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:01.863 20:53:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:01.863 20:53:20 -- target/identify_passthru.sh@39 -- # timing_exit 
start_nvmf_tgt 00:36:01.864 20:53:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:36:01.864 20:53:20 -- common/autotest_common.sh@10 -- # set +x 00:36:01.864 20:53:20 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:c9:00.0 00:36:01.864 20:53:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:01.864 20:53:20 -- common/autotest_common.sh@10 -- # set +x 00:36:05.169 Nvme0n1 00:36:05.169 20:53:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:05.169 20:53:22 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:05.169 20:53:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:05.169 20:53:22 -- common/autotest_common.sh@10 -- # set +x 00:36:05.169 20:53:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:05.169 20:53:22 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:05.169 20:53:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:05.169 20:53:22 -- common/autotest_common.sh@10 -- # set +x 00:36:05.169 20:53:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:05.169 20:53:22 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:05.169 20:53:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:05.169 20:53:22 -- common/autotest_common.sh@10 -- # set +x 00:36:05.169 [2024-04-26 20:53:22.942965] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:05.169 20:53:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:05.169 20:53:22 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:05.169 20:53:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:05.169 20:53:22 -- common/autotest_common.sh@10 -- # set +x 00:36:05.169 [2024-04-26 20:53:22.950683] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:36:05.169 [ 00:36:05.169 { 00:36:05.169 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:05.169 "subtype": "Discovery", 00:36:05.169 "listen_addresses": [], 00:36:05.169 "allow_any_host": true, 00:36:05.169 "hosts": [] 00:36:05.169 }, 00:36:05.169 { 00:36:05.169 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:05.169 "subtype": "NVMe", 00:36:05.169 "listen_addresses": [ 00:36:05.169 { 00:36:05.169 "transport": "TCP", 00:36:05.169 "trtype": "TCP", 00:36:05.169 "adrfam": "IPv4", 00:36:05.169 "traddr": "10.0.0.2", 00:36:05.169 "trsvcid": "4420" 00:36:05.169 } 00:36:05.169 ], 00:36:05.169 "allow_any_host": true, 00:36:05.169 "hosts": [], 00:36:05.169 "serial_number": "SPDK00000000000001", 00:36:05.169 "model_number": "SPDK bdev Controller", 00:36:05.169 "max_namespaces": 1, 00:36:05.169 "min_cntlid": 1, 00:36:05.169 "max_cntlid": 65519, 00:36:05.169 "namespaces": [ 00:36:05.169 { 00:36:05.169 "nsid": 1, 00:36:05.169 "bdev_name": "Nvme0n1", 00:36:05.169 "name": "Nvme0n1", 00:36:05.169 "nguid": "23B5B27A726A4B6EB74A9931E3A16F6F", 00:36:05.169 "uuid": "23b5b27a-726a-4b6e-b74a-9931e3a16f6f" 00:36:05.169 } 00:36:05.169 ] 00:36:05.169 } 00:36:05.169 ] 00:36:05.169 20:53:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:05.169 20:53:22 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:05.169 20:53:22 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:05.169 20:53:22 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:05.169 EAL: No free 2048 kB hugepages reported on node 1 00:36:05.169 20:53:23 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ9413009R2P0BGN 00:36:05.169 20:53:23 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:05.169 20:53:23 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:05.169 20:53:23 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:05.169 EAL: No free 2048 kB hugepages reported on node 1 00:36:05.169 20:53:23 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:36:05.169 20:53:23 -- target/identify_passthru.sh@63 -- # '[' PHLJ9413009R2P0BGN '!=' PHLJ9413009R2P0BGN ']' 00:36:05.169 20:53:23 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:36:05.169 20:53:23 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:05.169 20:53:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:05.169 20:53:23 -- common/autotest_common.sh@10 -- # set +x 00:36:05.169 20:53:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:05.169 20:53:23 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:05.169 20:53:23 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:05.169 20:53:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:36:05.169 20:53:23 -- nvmf/common.sh@116 -- # sync 00:36:05.169 20:53:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:36:05.169 20:53:23 -- nvmf/common.sh@119 -- # set +e 00:36:05.170 20:53:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:36:05.170 20:53:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:36:05.170 rmmod nvme_tcp 00:36:05.430 rmmod nvme_fabrics 00:36:05.430 rmmod nvme_keyring 00:36:05.430 20:53:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:36:05.430 20:53:23 -- nvmf/common.sh@123 -- # set -e 00:36:05.430 20:53:23 -- nvmf/common.sh@124 -- # return 0 00:36:05.430 20:53:23 -- nvmf/common.sh@477 -- # '[' -n 3793147 ']' 00:36:05.430 20:53:23 -- nvmf/common.sh@478 -- # killprocess 3793147 00:36:05.430 20:53:23 -- common/autotest_common.sh@926 -- # '[' -z 3793147 ']' 00:36:05.430 20:53:23 -- common/autotest_common.sh@930 -- # kill -0 3793147 00:36:05.430 20:53:23 -- common/autotest_common.sh@931 -- # uname 00:36:05.430 20:53:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:36:05.430 20:53:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3793147 00:36:05.430 20:53:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:36:05.430 20:53:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:36:05.430 20:53:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3793147' 00:36:05.430 killing process with pid 3793147 00:36:05.430 20:53:23 -- common/autotest_common.sh@945 -- # kill 3793147 00:36:05.430 [2024-04-26 20:53:23.592681] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:36:05.430 20:53:23 -- common/autotest_common.sh@950 -- # wait 3793147 00:36:08.719 20:53:26 -- nvmf/common.sh@480 -- # '[' 
'' == iso ']' 00:36:08.719 20:53:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:36:08.719 20:53:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:36:08.719 20:53:26 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:08.719 20:53:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:36:08.719 20:53:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:08.719 20:53:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:08.719 20:53:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:10.099 20:53:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:36:10.099 00:36:10.099 real 0m25.886s 00:36:10.099 user 0m36.879s 00:36:10.099 sys 0m5.469s 00:36:10.099 20:53:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:10.099 20:53:28 -- common/autotest_common.sh@10 -- # set +x 00:36:10.099 ************************************ 00:36:10.099 END TEST nvmf_identify_passthru 00:36:10.099 ************************************ 00:36:10.099 20:53:28 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:10.099 20:53:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:36:10.099 20:53:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:36:10.099 20:53:28 -- common/autotest_common.sh@10 -- # set +x 00:36:10.099 ************************************ 00:36:10.099 START TEST nvmf_dif 00:36:10.099 ************************************ 00:36:10.099 20:53:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:10.359 * Looking for test storage... 00:36:10.359 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:36:10.359 20:53:28 -- target/dif.sh@13 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:36:10.359 20:53:28 -- nvmf/common.sh@7 -- # uname -s 00:36:10.359 20:53:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:10.359 20:53:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:10.359 20:53:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:10.359 20:53:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:10.359 20:53:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:10.359 20:53:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:10.359 20:53:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:10.359 20:53:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:10.359 20:53:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:10.359 20:53:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:10.359 20:53:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:36:10.359 20:53:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:36:10.359 20:53:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:10.359 20:53:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:10.359 20:53:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:36:10.359 20:53:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:36:10.359 20:53:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:10.359 20:53:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:10.359 20:53:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
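Note: the nvmf_dif run begins by calling nvmftestinit (dif.sh@135, traced below), which rebuilds the same two-port loopback topology used for identify_passthru above: one port of the NIC is moved into a private network namespace to act as the target side, while the other stays in the root namespace as the initiator side. Condensed from the ip/iptables trace lines, the bring-up is:

    ip netns add cvl_0_0_ns_spdk                    # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # move the target port in
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # root ns -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The two pings, echoed in the trace, are the sanity check that the loopback path works in both directions before any NVMe-oF traffic is attempted.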
00:36:10.359 20:53:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.359 20:53:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.359 20:53:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.359 20:53:28 -- paths/export.sh@5 -- # export PATH 00:36:10.359 20:53:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.359 20:53:28 -- nvmf/common.sh@46 -- # : 0 00:36:10.359 20:53:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:36:10.359 20:53:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:36:10.359 20:53:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:36:10.359 20:53:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:10.359 20:53:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:10.359 20:53:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:36:10.359 20:53:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:36:10.359 20:53:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:36:10.359 20:53:28 -- target/dif.sh@15 -- # NULL_META=16 00:36:10.359 20:53:28 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:36:10.359 20:53:28 -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:10.359 20:53:28 -- target/dif.sh@15 -- # NULL_DIF=1 00:36:10.359 20:53:28 -- target/dif.sh@135 -- # nvmftestinit 00:36:10.359 20:53:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:36:10.359 20:53:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:10.359 20:53:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:36:10.359 20:53:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:36:10.359 20:53:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:36:10.359 20:53:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:10.359 20:53:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:10.359 20:53:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:10.359 20:53:28 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:36:10.359 20:53:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:36:10.359 20:53:28 -- 
nvmf/common.sh@284 -- # xtrace_disable 00:36:10.359 20:53:28 -- common/autotest_common.sh@10 -- # set +x 00:36:15.642 20:53:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:36:15.642 20:53:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:36:15.642 20:53:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:36:15.642 20:53:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:36:15.642 20:53:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:36:15.642 20:53:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:36:15.642 20:53:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:36:15.642 20:53:33 -- nvmf/common.sh@294 -- # net_devs=() 00:36:15.642 20:53:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:36:15.642 20:53:33 -- nvmf/common.sh@295 -- # e810=() 00:36:15.642 20:53:33 -- nvmf/common.sh@295 -- # local -ga e810 00:36:15.642 20:53:33 -- nvmf/common.sh@296 -- # x722=() 00:36:15.642 20:53:33 -- nvmf/common.sh@296 -- # local -ga x722 00:36:15.642 20:53:33 -- nvmf/common.sh@297 -- # mlx=() 00:36:15.642 20:53:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:36:15.642 20:53:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:15.642 20:53:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:15.642 20:53:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:15.642 20:53:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:15.642 20:53:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:15.642 20:53:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:15.642 20:53:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:15.642 20:53:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:15.642 20:53:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:15.642 20:53:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:15.642 20:53:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:15.642 20:53:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:36:15.642 20:53:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:36:15.642 20:53:33 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:36:15.642 20:53:33 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:36:15.642 20:53:33 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:36:15.642 20:53:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:36:15.642 20:53:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:36:15.642 20:53:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:36:15.642 Found 0000:27:00.0 (0x8086 - 0x159b) 00:36:15.642 20:53:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:36:15.642 20:53:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:36:15.642 20:53:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:15.642 20:53:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:15.642 20:53:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:36:15.642 20:53:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:36:15.642 20:53:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:36:15.642 Found 0000:27:00.1 (0x8086 - 0x159b) 00:36:15.642 20:53:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:36:15.642 20:53:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:36:15.642 20:53:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:15.642 20:53:33 -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:15.642 20:53:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:36:15.642 20:53:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:36:15.642 20:53:33 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:36:15.642 20:53:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:36:15.642 20:53:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:15.642 20:53:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:36:15.642 20:53:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:15.642 20:53:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:36:15.642 Found net devices under 0000:27:00.0: cvl_0_0 00:36:15.642 20:53:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:36:15.642 20:53:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:36:15.642 20:53:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:15.642 20:53:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:36:15.642 20:53:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:15.642 20:53:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:36:15.642 Found net devices under 0000:27:00.1: cvl_0_1 00:36:15.642 20:53:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:36:15.642 20:53:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:36:15.642 20:53:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:36:15.642 20:53:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:36:15.642 20:53:33 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:36:15.642 20:53:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:36:15.642 20:53:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:15.642 20:53:33 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:15.642 20:53:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:15.642 20:53:33 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:36:15.642 20:53:33 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:15.642 20:53:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:15.642 20:53:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:36:15.642 20:53:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:15.642 20:53:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:15.642 20:53:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:36:15.642 20:53:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:36:15.642 20:53:33 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:36:15.642 20:53:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:15.642 20:53:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:15.642 20:53:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:15.902 20:53:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:36:15.902 20:53:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:15.902 20:53:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:15.902 20:53:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:15.902 20:53:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:36:15.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:15.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.493 ms 00:36:15.902 00:36:15.902 --- 10.0.0.2 ping statistics --- 00:36:15.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:15.902 rtt min/avg/max/mdev = 0.493/0.493/0.493/0.000 ms 00:36:15.902 20:53:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:15.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:15.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.407 ms 00:36:15.902 00:36:15.902 --- 10.0.0.1 ping statistics --- 00:36:15.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:15.902 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:36:15.902 20:53:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:15.902 20:53:34 -- nvmf/common.sh@410 -- # return 0 00:36:15.902 20:53:34 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:36:15.902 20:53:34 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:36:18.434 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:36:18.435 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:18.435 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:36:18.435 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:36:18.435 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:36:18.435 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:36:18.435 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:36:18.435 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:36:18.435 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:36:18.435 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:36:18.435 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:36:18.435 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:36:18.435 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:36:18.435 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:36:18.435 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:18.435 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:36:18.435 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:36:18.435 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:36:18.695 20:53:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:18.695 20:53:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:36:18.695 20:53:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:36:18.695 20:53:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:18.695 20:53:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:36:18.695 20:53:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:36:18.695 20:53:36 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:18.695 20:53:36 -- target/dif.sh@137 -- # nvmfappstart 00:36:18.695 20:53:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:36:18.695 20:53:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:36:18.695 20:53:36 -- common/autotest_common.sh@10 -- # set +x 00:36:18.695 20:53:36 -- nvmf/common.sh@469 -- # nvmfpid=3799779 00:36:18.695 20:53:36 -- nvmf/common.sh@470 -- # waitforlisten 3799779 00:36:18.695 20:53:36 -- common/autotest_common.sh@819 -- # '[' -z 3799779 ']' 00:36:18.695 20:53:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:18.695 20:53:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:36:18.695 
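Note: nvmf/common.sh@453 above sets NVMF_TRANSPORT_OPTS='-t tcp -o', and dif.sh@136 appends ' --dif-insert-or-strip'. With that flag the TCP transport performs DIF insertion on write and stripping on read at the target, so initiator-side I/O carries no metadata even though the backing bdevs in this test are formatted 512+16 with DIF type 1 (that is the documented purpose of the option; the log itself only shows the flag being passed). Pulled out of the traced lines, the assembly is:

    NVMF_TRANSPORT_OPTS='-t tcp -o'                # per nvmf/common.sh@453
    NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip'  # per target/dif.sh@136
    rpc_cmd nvmf_create_transport $NVMF_TRANSPORT_OPTS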
20:53:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:18.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:18.695 20:53:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:36:18.695 20:53:36 -- common/autotest_common.sh@10 -- # set +x 00:36:18.695 20:53:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:18.695 [2024-04-26 20:53:36.910218] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:36:18.695 [2024-04-26 20:53:36.910326] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:18.695 EAL: No free 2048 kB hugepages reported on node 1 00:36:18.695 [2024-04-26 20:53:37.032674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:18.955 [2024-04-26 20:53:37.129627] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:36:18.955 [2024-04-26 20:53:37.129804] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:18.955 [2024-04-26 20:53:37.129819] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:18.955 [2024-04-26 20:53:37.129829] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:18.955 [2024-04-26 20:53:37.129863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:19.521 20:53:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:36:19.521 20:53:37 -- common/autotest_common.sh@852 -- # return 0 00:36:19.521 20:53:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:36:19.521 20:53:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:36:19.521 20:53:37 -- common/autotest_common.sh@10 -- # set +x 00:36:19.521 20:53:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:19.521 20:53:37 -- target/dif.sh@139 -- # create_transport 00:36:19.521 20:53:37 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:19.521 20:53:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:19.521 20:53:37 -- common/autotest_common.sh@10 -- # set +x 00:36:19.521 [2024-04-26 20:53:37.629420] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:19.521 20:53:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:19.521 20:53:37 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:19.521 20:53:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:36:19.521 20:53:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:36:19.521 20:53:37 -- common/autotest_common.sh@10 -- # set +x 00:36:19.521 ************************************ 00:36:19.521 START TEST fio_dif_1_default 00:36:19.521 ************************************ 00:36:19.521 20:53:37 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:36:19.521 20:53:37 -- target/dif.sh@86 -- # create_subsystems 0 00:36:19.521 20:53:37 -- target/dif.sh@28 -- # local sub 00:36:19.521 20:53:37 -- target/dif.sh@30 -- # for sub in "$@" 00:36:19.521 20:53:37 -- target/dif.sh@31 -- # create_subsystem 0 00:36:19.521 20:53:37 -- target/dif.sh@18 -- # local sub_id=0 
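Note: create_subsystems 0, entered in the trace below, boils down to four RPCs: create a DIF-formatted null bdev, create a subsystem, attach the bdev as its namespace, and add a TCP listener. Replayed standalone with scripts/rpc.py (a sketch with the exact arguments from the trace; rpc_cmd in the harness forwards to this script against the running target's RPC socket):

    rpc=./scripts/rpc.py    # run from the SPDK checkout
    $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
         --serial-number 53313233-0 --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
         -t tcp -a 10.0.0.2 -s 4420

The multi-subsystems variant further below simply runs this sequence twice, for cnode0/bdev_null0 and cnode1/bdev_null1, both listening on 10.0.0.2:4420.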
00:36:19.522 20:53:37 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:19.522 20:53:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:19.522 20:53:37 -- common/autotest_common.sh@10 -- # set +x 00:36:19.522 bdev_null0 00:36:19.522 20:53:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:19.522 20:53:37 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:19.522 20:53:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:19.522 20:53:37 -- common/autotest_common.sh@10 -- # set +x 00:36:19.522 20:53:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:19.522 20:53:37 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:19.522 20:53:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:19.522 20:53:37 -- common/autotest_common.sh@10 -- # set +x 00:36:19.522 20:53:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:19.522 20:53:37 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:19.522 20:53:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:19.522 20:53:37 -- common/autotest_common.sh@10 -- # set +x 00:36:19.522 [2024-04-26 20:53:37.665567] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:19.522 20:53:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:19.522 20:53:37 -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:19.522 20:53:37 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:19.522 20:53:37 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:19.522 20:53:37 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:36:19.522 20:53:37 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:19.522 20:53:37 -- common/autotest_common.sh@1318 -- # local sanitizers 00:36:19.522 20:53:37 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:36:19.522 20:53:37 -- common/autotest_common.sh@1320 -- # shift 00:36:19.522 20:53:37 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:36:19.522 20:53:37 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:19.522 20:53:37 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:36:19.522 20:53:37 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:19.522 20:53:37 -- target/dif.sh@82 -- # gen_fio_conf 00:36:19.522 20:53:37 -- nvmf/common.sh@520 -- # config=() 00:36:19.522 20:53:37 -- nvmf/common.sh@520 -- # local subsystem config 00:36:19.522 20:53:37 -- target/dif.sh@54 -- # local file 00:36:19.522 20:53:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:36:19.522 20:53:37 -- target/dif.sh@56 -- # cat 00:36:19.522 20:53:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:36:19.522 { 00:36:19.522 "params": { 00:36:19.522 "name": "Nvme$subsystem", 00:36:19.522 "trtype": "$TEST_TRANSPORT", 00:36:19.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:19.522 "adrfam": "ipv4", 00:36:19.522 "trsvcid": "$NVMF_PORT", 00:36:19.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:19.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:19.522 "hdgst": ${hdgst:-false}, 
00:36:19.522 "ddgst": ${ddgst:-false} 00:36:19.522 }, 00:36:19.522 "method": "bdev_nvme_attach_controller" 00:36:19.522 } 00:36:19.522 EOF 00:36:19.522 )") 00:36:19.522 20:53:37 -- nvmf/common.sh@542 -- # cat 00:36:19.522 20:53:37 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:36:19.522 20:53:37 -- common/autotest_common.sh@1324 -- # grep libasan 00:36:19.522 20:53:37 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:36:19.522 20:53:37 -- target/dif.sh@72 -- # (( file = 1 )) 00:36:19.522 20:53:37 -- target/dif.sh@72 -- # (( file <= files )) 00:36:19.522 20:53:37 -- nvmf/common.sh@544 -- # jq . 00:36:19.522 20:53:37 -- nvmf/common.sh@545 -- # IFS=, 00:36:19.522 20:53:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:36:19.522 "params": { 00:36:19.522 "name": "Nvme0", 00:36:19.522 "trtype": "tcp", 00:36:19.522 "traddr": "10.0.0.2", 00:36:19.522 "adrfam": "ipv4", 00:36:19.522 "trsvcid": "4420", 00:36:19.522 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:19.522 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:19.522 "hdgst": false, 00:36:19.522 "ddgst": false 00:36:19.522 }, 00:36:19.522 "method": "bdev_nvme_attach_controller" 00:36:19.522 }' 00:36:19.522 20:53:37 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:19.522 20:53:37 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:19.522 20:53:37 -- common/autotest_common.sh@1326 -- # break 00:36:19.522 20:53:37 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:19.522 20:53:37 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:20.089 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:20.089 fio-3.35 00:36:20.089 Starting 1 thread 00:36:20.089 EAL: No free 2048 kB hugepages reported on node 1 00:36:20.657 [2024-04-26 20:53:38.716024] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:36:20.657 [2024-04-26 20:53:38.716100] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:36:30.634 00:36:30.634 filename0: (groupid=0, jobs=1): err= 0: pid=3800259: Fri Apr 26 20:53:48 2024 00:36:30.634 read: IOPS=185, BW=742KiB/s (759kB/s)(7424KiB/10010msec) 00:36:30.634 slat (nsec): min=5980, max=33702, avg=7002.55, stdev=1842.00 00:36:30.634 clat (usec): min=565, max=42990, avg=21554.01, stdev=20414.37 00:36:30.634 lat (usec): min=571, max=43023, avg=21561.01, stdev=20414.10 00:36:30.634 clat percentiles (usec): 00:36:30.634 | 1.00th=[ 676], 5.00th=[ 930], 10.00th=[ 971], 20.00th=[ 979], 00:36:30.634 | 30.00th=[ 988], 40.00th=[ 996], 50.00th=[41157], 60.00th=[41681], 00:36:30.634 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:30.634 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:36:30.634 | 99.99th=[42730] 00:36:30.634 bw ( KiB/s): min= 672, max= 768, per=99.78%, avg=740.80, stdev=34.86, samples=20 00:36:30.634 iops : min= 168, max= 192, avg=185.20, stdev= 8.72, samples=20 00:36:30.634 lat (usec) : 750=1.89%, 1000=39.17% 00:36:30.634 lat (msec) : 2=8.51%, 50=50.43% 00:36:30.634 cpu : usr=95.94%, sys=3.77%, ctx=20, majf=0, minf=1635 00:36:30.634 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:30.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.634 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:30.634 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:30.634 00:36:30.634 Run status group 0 (all jobs): 00:36:30.634 READ: bw=742KiB/s (759kB/s), 742KiB/s-742KiB/s (759kB/s-759kB/s), io=7424KiB (7602kB), run=10010-10010msec 00:36:31.201 ----------------------------------------------------- 00:36:31.201 Suppressions used: 00:36:31.201 count bytes template 00:36:31.201 1 8 /usr/src/fio/parse.c 00:36:31.201 1 8 libtcmalloc_minimal.so 00:36:31.201 1 904 libcrypto.so 00:36:31.201 ----------------------------------------------------- 00:36:31.201 00:36:31.201 20:53:49 -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:31.201 20:53:49 -- target/dif.sh@43 -- # local sub 00:36:31.201 20:53:49 -- target/dif.sh@45 -- # for sub in "$@" 00:36:31.201 20:53:49 -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:31.201 20:53:49 -- target/dif.sh@36 -- # local sub_id=0 00:36:31.201 20:53:49 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:31.201 20:53:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:31.201 20:53:49 -- common/autotest_common.sh@10 -- # set +x 00:36:31.201 20:53:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:31.201 20:53:49 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:31.201 20:53:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:31.201 20:53:49 -- common/autotest_common.sh@10 -- # set +x 00:36:31.201 20:53:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:31.201 00:36:31.201 real 0m11.803s 00:36:31.201 user 0m30.034s 00:36:31.201 sys 0m0.806s 00:36:31.201 20:53:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:31.201 20:53:49 -- common/autotest_common.sh@10 -- # set +x 00:36:31.201 ************************************ 00:36:31.201 END TEST fio_dif_1_default 00:36:31.201 ************************************ 00:36:31.201 20:53:49 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems 
fio_dif_1_multi_subsystems 00:36:31.201 20:53:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:36:31.201 20:53:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:36:31.201 20:53:49 -- common/autotest_common.sh@10 -- # set +x 00:36:31.201 ************************************ 00:36:31.201 START TEST fio_dif_1_multi_subsystems 00:36:31.201 ************************************ 00:36:31.201 20:53:49 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:36:31.201 20:53:49 -- target/dif.sh@92 -- # local files=1 00:36:31.201 20:53:49 -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:31.201 20:53:49 -- target/dif.sh@28 -- # local sub 00:36:31.201 20:53:49 -- target/dif.sh@30 -- # for sub in "$@" 00:36:31.201 20:53:49 -- target/dif.sh@31 -- # create_subsystem 0 00:36:31.201 20:53:49 -- target/dif.sh@18 -- # local sub_id=0 00:36:31.201 20:53:49 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:31.201 20:53:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:31.201 20:53:49 -- common/autotest_common.sh@10 -- # set +x 00:36:31.201 bdev_null0 00:36:31.201 20:53:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:31.201 20:53:49 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:31.201 20:53:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:31.201 20:53:49 -- common/autotest_common.sh@10 -- # set +x 00:36:31.201 20:53:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:31.201 20:53:49 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:31.201 20:53:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:31.201 20:53:49 -- common/autotest_common.sh@10 -- # set +x 00:36:31.201 20:53:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:31.201 20:53:49 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:31.201 20:53:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:31.201 20:53:49 -- common/autotest_common.sh@10 -- # set +x 00:36:31.201 [2024-04-26 20:53:49.505532] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:31.201 20:53:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:31.201 20:53:49 -- target/dif.sh@30 -- # for sub in "$@" 00:36:31.201 20:53:49 -- target/dif.sh@31 -- # create_subsystem 1 00:36:31.201 20:53:49 -- target/dif.sh@18 -- # local sub_id=1 00:36:31.201 20:53:49 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:31.201 20:53:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:31.201 20:53:49 -- common/autotest_common.sh@10 -- # set +x 00:36:31.201 bdev_null1 00:36:31.201 20:53:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:31.201 20:53:49 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:31.201 20:53:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:31.201 20:53:49 -- common/autotest_common.sh@10 -- # set +x 00:36:31.201 20:53:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:31.201 20:53:49 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:31.201 20:53:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:31.201 20:53:49 -- common/autotest_common.sh@10 -- # set 
+x 00:36:31.201 20:53:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:31.201 20:53:49 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:31.201 20:53:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:31.201 20:53:49 -- common/autotest_common.sh@10 -- # set +x 00:36:31.201 20:53:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:31.201 20:53:49 -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:31.460 20:53:49 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:31.460 20:53:49 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:31.460 20:53:49 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:36:31.460 20:53:49 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:31.460 20:53:49 -- common/autotest_common.sh@1318 -- # local sanitizers 00:36:31.460 20:53:49 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:36:31.460 20:53:49 -- common/autotest_common.sh@1320 -- # shift 00:36:31.460 20:53:49 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:36:31.460 20:53:49 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:31.460 20:53:49 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:36:31.460 20:53:49 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:31.460 20:53:49 -- nvmf/common.sh@520 -- # config=() 00:36:31.460 20:53:49 -- target/dif.sh@82 -- # gen_fio_conf 00:36:31.460 20:53:49 -- nvmf/common.sh@520 -- # local subsystem config 00:36:31.460 20:53:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:36:31.460 20:53:49 -- target/dif.sh@54 -- # local file 00:36:31.460 20:53:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:36:31.460 { 00:36:31.460 "params": { 00:36:31.460 "name": "Nvme$subsystem", 00:36:31.460 "trtype": "$TEST_TRANSPORT", 00:36:31.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:31.460 "adrfam": "ipv4", 00:36:31.460 "trsvcid": "$NVMF_PORT", 00:36:31.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:31.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:31.460 "hdgst": ${hdgst:-false}, 00:36:31.460 "ddgst": ${ddgst:-false} 00:36:31.460 }, 00:36:31.460 "method": "bdev_nvme_attach_controller" 00:36:31.460 } 00:36:31.460 EOF 00:36:31.460 )") 00:36:31.460 20:53:49 -- target/dif.sh@56 -- # cat 00:36:31.460 20:53:49 -- nvmf/common.sh@542 -- # cat 00:36:31.460 20:53:49 -- common/autotest_common.sh@1324 -- # grep libasan 00:36:31.460 20:53:49 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:36:31.460 20:53:49 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:36:31.460 20:53:49 -- target/dif.sh@72 -- # (( file = 1 )) 00:36:31.460 20:53:49 -- target/dif.sh@72 -- # (( file <= files )) 00:36:31.460 20:53:49 -- target/dif.sh@73 -- # cat 00:36:31.460 20:53:49 -- target/dif.sh@72 -- # (( file++ )) 00:36:31.460 20:53:49 -- target/dif.sh@72 -- # (( file <= files )) 00:36:31.460 20:53:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:36:31.460 20:53:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:36:31.460 { 00:36:31.460 "params": { 00:36:31.460 "name": "Nvme$subsystem", 00:36:31.460 "trtype": "$TEST_TRANSPORT", 00:36:31.460 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:36:31.460 "adrfam": "ipv4", 00:36:31.460 "trsvcid": "$NVMF_PORT", 00:36:31.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:31.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:31.460 "hdgst": ${hdgst:-false}, 00:36:31.460 "ddgst": ${ddgst:-false} 00:36:31.460 }, 00:36:31.460 "method": "bdev_nvme_attach_controller" 00:36:31.460 } 00:36:31.460 EOF 00:36:31.460 )") 00:36:31.460 20:53:49 -- nvmf/common.sh@542 -- # cat 00:36:31.460 20:53:49 -- nvmf/common.sh@544 -- # jq . 00:36:31.460 20:53:49 -- nvmf/common.sh@545 -- # IFS=, 00:36:31.460 20:53:49 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:36:31.460 "params": { 00:36:31.460 "name": "Nvme0", 00:36:31.460 "trtype": "tcp", 00:36:31.460 "traddr": "10.0.0.2", 00:36:31.460 "adrfam": "ipv4", 00:36:31.460 "trsvcid": "4420", 00:36:31.460 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:31.460 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:31.460 "hdgst": false, 00:36:31.460 "ddgst": false 00:36:31.460 }, 00:36:31.460 "method": "bdev_nvme_attach_controller" 00:36:31.460 },{ 00:36:31.460 "params": { 00:36:31.460 "name": "Nvme1", 00:36:31.460 "trtype": "tcp", 00:36:31.460 "traddr": "10.0.0.2", 00:36:31.460 "adrfam": "ipv4", 00:36:31.460 "trsvcid": "4420", 00:36:31.460 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:31.460 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:31.460 "hdgst": false, 00:36:31.460 "ddgst": false 00:36:31.460 }, 00:36:31.460 "method": "bdev_nvme_attach_controller" 00:36:31.460 }' 00:36:31.460 20:53:49 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:31.460 20:53:49 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:31.460 20:53:49 -- common/autotest_common.sh@1326 -- # break 00:36:31.460 20:53:49 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:31.460 20:53:49 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:31.719 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:31.719 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:31.719 fio-3.35 00:36:31.719 Starting 2 threads 00:36:31.719 EAL: No free 2048 kB hugepages reported on node 1 00:36:32.657 [2024-04-26 20:53:50.791742] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:36:32.657 [2024-04-26 20:53:50.791817] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:36:42.628 00:36:42.628 filename0: (groupid=0, jobs=1): err= 0: pid=3802818: Fri Apr 26 20:54:00 2024 00:36:42.628 read: IOPS=186, BW=746KiB/s (763kB/s)(7456KiB/10001msec) 00:36:42.628 slat (nsec): min=3075, max=23231, avg=6736.34, stdev=1307.34 00:36:42.628 clat (usec): min=919, max=42964, avg=21442.34, stdev=20455.39 00:36:42.628 lat (usec): min=927, max=42981, avg=21449.08, stdev=20455.02 00:36:42.628 clat percentiles (usec): 00:36:42.628 | 1.00th=[ 963], 5.00th=[ 971], 10.00th=[ 971], 20.00th=[ 979], 00:36:42.628 | 30.00th=[ 988], 40.00th=[ 996], 50.00th=[ 1434], 60.00th=[41681], 00:36:42.628 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:42.628 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:36:42.628 | 99.99th=[42730] 00:36:42.628 bw ( KiB/s): min= 704, max= 768, per=50.05%, avg=744.42, stdev=31.72, samples=19 00:36:42.628 iops : min= 176, max= 192, avg=186.11, stdev= 7.93, samples=19 00:36:42.628 lat (usec) : 1000=42.76% 00:36:42.628 lat (msec) : 2=7.24%, 50=50.00% 00:36:42.629 cpu : usr=98.72%, sys=1.00%, ctx=19, majf=0, minf=1638 00:36:42.629 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:42.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.629 issued rwts: total=1864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:42.629 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:42.629 filename1: (groupid=0, jobs=1): err= 0: pid=3802819: Fri Apr 26 20:54:00 2024 00:36:42.629 read: IOPS=185, BW=742KiB/s (759kB/s)(7424KiB/10010msec) 00:36:42.629 slat (nsec): min=2881, max=19629, avg=6732.08, stdev=1187.16 00:36:42.629 clat (usec): min=926, max=43465, avg=21553.46, stdev=20451.04 00:36:42.629 lat (usec): min=933, max=43484, avg=21560.20, stdev=20450.70 00:36:42.629 clat percentiles (usec): 00:36:42.629 | 1.00th=[ 955], 5.00th=[ 963], 10.00th=[ 971], 20.00th=[ 979], 00:36:42.629 | 30.00th=[ 988], 40.00th=[ 996], 50.00th=[41157], 60.00th=[41681], 00:36:42.629 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:42.629 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:36:42.629 | 99.99th=[43254] 00:36:42.629 bw ( KiB/s): min= 672, max= 768, per=49.78%, avg=740.80, stdev=34.86, samples=20 00:36:42.629 iops : min= 168, max= 192, avg=185.20, stdev= 8.72, samples=20 00:36:42.629 lat (usec) : 1000=40.46% 00:36:42.629 lat (msec) : 2=9.32%, 50=50.22% 00:36:42.629 cpu : usr=98.43%, sys=1.29%, ctx=13, majf=0, minf=1632 00:36:42.629 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:42.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.629 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:42.629 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:42.629 00:36:42.629 Run status group 0 (all jobs): 00:36:42.629 READ: bw=1487KiB/s (1522kB/s), 742KiB/s-746KiB/s (759kB/s-763kB/s), io=14.5MiB (15.2MB), run=10001-10010msec 00:36:43.566 ----------------------------------------------------- 00:36:43.566 Suppressions used: 00:36:43.566 count bytes template 00:36:43.566 2 16 /usr/src/fio/parse.c 00:36:43.566 1 8 libtcmalloc_minimal.so 00:36:43.566 1 
904 libcrypto.so 00:36:43.566 ----------------------------------------------------- 00:36:43.566 00:36:43.566 20:54:01 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:36:43.566 20:54:01 -- target/dif.sh@43 -- # local sub 00:36:43.566 20:54:01 -- target/dif.sh@45 -- # for sub in "$@" 00:36:43.566 20:54:01 -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:43.566 20:54:01 -- target/dif.sh@36 -- # local sub_id=0 00:36:43.566 20:54:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:43.566 20:54:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:43.566 20:54:01 -- common/autotest_common.sh@10 -- # set +x 00:36:43.566 20:54:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:43.566 20:54:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:43.566 20:54:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:43.566 20:54:01 -- common/autotest_common.sh@10 -- # set +x 00:36:43.566 20:54:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:43.566 20:54:01 -- target/dif.sh@45 -- # for sub in "$@" 00:36:43.566 20:54:01 -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:43.566 20:54:01 -- target/dif.sh@36 -- # local sub_id=1 00:36:43.566 20:54:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:43.566 20:54:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:43.566 20:54:01 -- common/autotest_common.sh@10 -- # set +x 00:36:43.566 20:54:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:43.566 20:54:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:43.566 20:54:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:43.566 20:54:01 -- common/autotest_common.sh@10 -- # set +x 00:36:43.566 20:54:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:43.566 00:36:43.566 real 0m12.163s 00:36:43.566 user 0m36.559s 00:36:43.566 sys 0m0.682s 00:36:43.566 20:54:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:43.566 20:54:01 -- common/autotest_common.sh@10 -- # set +x 00:36:43.566 ************************************ 00:36:43.566 END TEST fio_dif_1_multi_subsystems 00:36:43.566 ************************************ 00:36:43.566 20:54:01 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:36:43.566 20:54:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:36:43.566 20:54:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:36:43.566 20:54:01 -- common/autotest_common.sh@10 -- # set +x 00:36:43.566 ************************************ 00:36:43.566 START TEST fio_dif_rand_params 00:36:43.566 ************************************ 00:36:43.566 20:54:01 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:36:43.566 20:54:01 -- target/dif.sh@100 -- # local NULL_DIF 00:36:43.566 20:54:01 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:43.566 20:54:01 -- target/dif.sh@103 -- # NULL_DIF=3 00:36:43.566 20:54:01 -- target/dif.sh@103 -- # bs=128k 00:36:43.566 20:54:01 -- target/dif.sh@103 -- # numjobs=3 00:36:43.566 20:54:01 -- target/dif.sh@103 -- # iodepth=3 00:36:43.566 20:54:01 -- target/dif.sh@103 -- # runtime=5 00:36:43.566 20:54:01 -- target/dif.sh@105 -- # create_subsystems 0 00:36:43.566 20:54:01 -- target/dif.sh@28 -- # local sub 00:36:43.566 20:54:01 -- target/dif.sh@30 -- # for sub in "$@" 00:36:43.566 20:54:01 -- target/dif.sh@31 -- # create_subsystem 0 00:36:43.566 20:54:01 -- target/dif.sh@18 -- # local sub_id=0 00:36:43.566 20:54:01 -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:43.566 20:54:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:43.566 20:54:01 -- common/autotest_common.sh@10 -- # set +x 00:36:43.566 bdev_null0 00:36:43.566 20:54:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:43.566 20:54:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:43.566 20:54:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:43.566 20:54:01 -- common/autotest_common.sh@10 -- # set +x 00:36:43.566 20:54:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:43.566 20:54:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:43.567 20:54:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:43.567 20:54:01 -- common/autotest_common.sh@10 -- # set +x 00:36:43.567 20:54:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:43.567 20:54:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:43.567 20:54:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:43.567 20:54:01 -- common/autotest_common.sh@10 -- # set +x 00:36:43.567 [2024-04-26 20:54:01.707792] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:43.567 20:54:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:43.567 20:54:01 -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:43.567 20:54:01 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:43.567 20:54:01 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:43.567 20:54:01 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:36:43.567 20:54:01 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:43.567 20:54:01 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:43.567 20:54:01 -- common/autotest_common.sh@1318 -- # local sanitizers 00:36:43.567 20:54:01 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:36:43.567 20:54:01 -- common/autotest_common.sh@1320 -- # shift 00:36:43.567 20:54:01 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:43.567 20:54:01 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:36:43.567 20:54:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:36:43.567 20:54:01 -- nvmf/common.sh@520 -- # config=() 00:36:43.567 20:54:01 -- target/dif.sh@82 -- # gen_fio_conf 00:36:43.567 20:54:01 -- nvmf/common.sh@520 -- # local subsystem config 00:36:43.567 20:54:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:36:43.567 20:54:01 -- target/dif.sh@54 -- # local file 00:36:43.567 20:54:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:36:43.567 { 00:36:43.567 "params": { 00:36:43.567 "name": "Nvme$subsystem", 00:36:43.567 "trtype": "$TEST_TRANSPORT", 00:36:43.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:43.567 "adrfam": "ipv4", 00:36:43.567 "trsvcid": "$NVMF_PORT", 00:36:43.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:43.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:43.567 "hdgst": ${hdgst:-false}, 00:36:43.567 "ddgst": ${ddgst:-false} 00:36:43.567 }, 00:36:43.567 "method": 
"bdev_nvme_attach_controller" 00:36:43.567 } 00:36:43.567 EOF 00:36:43.567 )") 00:36:43.567 20:54:01 -- target/dif.sh@56 -- # cat 00:36:43.567 20:54:01 -- nvmf/common.sh@542 -- # cat 00:36:43.567 20:54:01 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:36:43.567 20:54:01 -- common/autotest_common.sh@1324 -- # grep libasan 00:36:43.567 20:54:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:36:43.567 20:54:01 -- target/dif.sh@72 -- # (( file = 1 )) 00:36:43.567 20:54:01 -- target/dif.sh@72 -- # (( file <= files )) 00:36:43.567 20:54:01 -- nvmf/common.sh@544 -- # jq . 00:36:43.567 20:54:01 -- nvmf/common.sh@545 -- # IFS=, 00:36:43.567 20:54:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:36:43.567 "params": { 00:36:43.567 "name": "Nvme0", 00:36:43.567 "trtype": "tcp", 00:36:43.567 "traddr": "10.0.0.2", 00:36:43.567 "adrfam": "ipv4", 00:36:43.567 "trsvcid": "4420", 00:36:43.567 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:43.567 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:43.567 "hdgst": false, 00:36:43.567 "ddgst": false 00:36:43.567 }, 00:36:43.567 "method": "bdev_nvme_attach_controller" 00:36:43.567 }' 00:36:43.567 20:54:01 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:43.567 20:54:01 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:43.567 20:54:01 -- common/autotest_common.sh@1326 -- # break 00:36:43.567 20:54:01 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:43.567 20:54:01 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:43.827 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:43.827 ... 00:36:43.827 fio-3.35 00:36:43.827 Starting 3 threads 00:36:44.086 EAL: No free 2048 kB hugepages reported on node 1 00:36:44.652 [2024-04-26 20:54:02.886912] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:36:44.652 [2024-04-26 20:54:02.886994] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:36:49.921 00:36:49.921 filename0: (groupid=0, jobs=1): err= 0: pid=3805501: Fri Apr 26 20:54:08 2024 00:36:49.921 read: IOPS=278, BW=34.8MiB/s (36.5MB/s)(176MiB/5045msec) 00:36:49.921 slat (nsec): min=5980, max=27583, avg=7564.45, stdev=1959.85 00:36:49.921 clat (usec): min=3699, max=57602, avg=10692.64, stdev=12181.75 00:36:49.921 lat (usec): min=3706, max=57630, avg=10700.20, stdev=12181.97 00:36:49.921 clat percentiles (usec): 00:36:49.921 | 1.00th=[ 4228], 5.00th=[ 4555], 10.00th=[ 4817], 20.00th=[ 5407], 00:36:49.921 | 30.00th=[ 6063], 40.00th=[ 6390], 50.00th=[ 6783], 60.00th=[ 7308], 00:36:49.921 | 70.00th=[ 8029], 80.00th=[ 9241], 90.00th=[11338], 95.00th=[48497], 00:36:49.921 | 99.00th=[50594], 99.50th=[51119], 99.90th=[57410], 99.95th=[57410], 00:36:49.921 | 99.99th=[57410] 00:36:49.921 bw ( KiB/s): min=22272, max=52480, per=36.12%, avg=35916.80, stdev=9200.22, samples=10 00:36:49.921 iops : min= 174, max= 410, avg=280.60, stdev=71.88, samples=10 00:36:49.921 lat (msec) : 4=0.14%, 10=84.21%, 20=6.61%, 50=7.18%, 100=1.85% 00:36:49.921 cpu : usr=96.83%, sys=2.85%, ctx=7, majf=0, minf=1637 00:36:49.921 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:49.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:49.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:49.921 issued rwts: total=1406,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:49.921 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:49.921 filename0: (groupid=0, jobs=1): err= 0: pid=3805502: Fri Apr 26 20:54:08 2024 00:36:49.921 read: IOPS=244, BW=30.5MiB/s (32.0MB/s)(153MiB/5002msec) 00:36:49.921 slat (nsec): min=6015, max=25139, avg=8020.26, stdev=2175.56 00:36:49.921 clat (usec): min=3795, max=90779, avg=12269.50, stdev=13989.47 00:36:49.921 lat (usec): min=3801, max=90786, avg=12277.52, stdev=13989.60 00:36:49.921 clat percentiles (usec): 00:36:49.921 | 1.00th=[ 4080], 5.00th=[ 4359], 10.00th=[ 4621], 20.00th=[ 5669], 00:36:49.921 | 30.00th=[ 6325], 40.00th=[ 6718], 50.00th=[ 7242], 60.00th=[ 7832], 00:36:49.921 | 70.00th=[ 8717], 80.00th=[10159], 90.00th=[47449], 95.00th=[50070], 00:36:49.921 | 99.00th=[51643], 99.50th=[52167], 99.90th=[52691], 99.95th=[90702], 00:36:49.921 | 99.99th=[90702] 00:36:49.921 bw ( KiB/s): min=21504, max=43008, per=31.39%, avg=31213.10, stdev=6761.42, samples=10 00:36:49.921 iops : min= 168, max= 336, avg=243.80, stdev=52.80, samples=10 00:36:49.921 lat (msec) : 4=0.25%, 10=78.81%, 20=9.00%, 50=7.20%, 100=4.75% 00:36:49.921 cpu : usr=96.86%, sys=2.80%, ctx=7, majf=0, minf=1636 00:36:49.921 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:49.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:49.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:49.921 issued rwts: total=1222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:49.921 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:49.921 filename0: (groupid=0, jobs=1): err= 0: pid=3805503: Fri Apr 26 20:54:08 2024 00:36:49.921 read: IOPS=255, BW=32.0MiB/s (33.5MB/s)(161MiB/5045msec) 00:36:49.921 slat (nsec): min=6007, max=26925, avg=8088.77, stdev=2353.32 00:36:49.921 clat (usec): min=3639, max=59841, avg=11678.81, stdev=13698.43 00:36:49.921 lat (usec): min=3646, max=59868, avg=11686.90, stdev=13698.52 
00:36:49.921 clat percentiles (usec): 00:36:49.921 | 1.00th=[ 3949], 5.00th=[ 4424], 10.00th=[ 4621], 20.00th=[ 5211], 00:36:49.921 | 30.00th=[ 5866], 40.00th=[ 6390], 50.00th=[ 6783], 60.00th=[ 7373], 00:36:49.922 | 70.00th=[ 8225], 80.00th=[ 9765], 90.00th=[47449], 95.00th=[49546], 00:36:49.922 | 99.00th=[52691], 99.50th=[54264], 99.90th=[60031], 99.95th=[60031], 00:36:49.922 | 99.99th=[60031] 00:36:49.922 bw ( KiB/s): min=16896, max=49664, per=33.16%, avg=32972.80, stdev=10834.17, samples=10 00:36:49.922 iops : min= 132, max= 388, avg=257.60, stdev=84.64, samples=10 00:36:49.922 lat (msec) : 4=1.24%, 10=79.40%, 20=8.06%, 50=7.13%, 100=4.18% 00:36:49.922 cpu : usr=96.99%, sys=2.70%, ctx=7, majf=0, minf=1635 00:36:49.922 IO depths : 1=1.2%, 2=98.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:49.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:49.922 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:49.922 issued rwts: total=1291,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:49.922 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:49.922 00:36:49.922 Run status group 0 (all jobs): 00:36:49.922 READ: bw=97.1MiB/s (102MB/s), 30.5MiB/s-34.8MiB/s (32.0MB/s-36.5MB/s), io=490MiB (514MB), run=5002-5045msec 00:36:50.491 ----------------------------------------------------- 00:36:50.491 Suppressions used: 00:36:50.491 count bytes template 00:36:50.491 5 44 /usr/src/fio/parse.c 00:36:50.491 1 8 libtcmalloc_minimal.so 00:36:50.491 1 904 libcrypto.so 00:36:50.491 ----------------------------------------------------- 00:36:50.491 00:36:50.491 20:54:08 -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:50.491 20:54:08 -- target/dif.sh@43 -- # local sub 00:36:50.491 20:54:08 -- target/dif.sh@45 -- # for sub in "$@" 00:36:50.491 20:54:08 -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:50.491 20:54:08 -- target/dif.sh@36 -- # local sub_id=0 00:36:50.491 20:54:08 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:50.491 20:54:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:50.491 20:54:08 -- common/autotest_common.sh@10 -- # set +x 00:36:50.491 20:54:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:50.492 20:54:08 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:50.492 20:54:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:50.492 20:54:08 -- common/autotest_common.sh@10 -- # set +x 00:36:50.492 20:54:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:50.492 20:54:08 -- target/dif.sh@109 -- # NULL_DIF=2 00:36:50.492 20:54:08 -- target/dif.sh@109 -- # bs=4k 00:36:50.492 20:54:08 -- target/dif.sh@109 -- # numjobs=8 00:36:50.492 20:54:08 -- target/dif.sh@109 -- # iodepth=16 00:36:50.492 20:54:08 -- target/dif.sh@109 -- # runtime= 00:36:50.492 20:54:08 -- target/dif.sh@109 -- # files=2 00:36:50.492 20:54:08 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:50.492 20:54:08 -- target/dif.sh@28 -- # local sub 00:36:50.492 20:54:08 -- target/dif.sh@30 -- # for sub in "$@" 00:36:50.492 20:54:08 -- target/dif.sh@31 -- # create_subsystem 0 00:36:50.492 20:54:08 -- target/dif.sh@18 -- # local sub_id=0 00:36:50.492 20:54:08 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:50.492 20:54:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:50.492 20:54:08 -- common/autotest_common.sh@10 -- # set +x 00:36:50.492 bdev_null0 00:36:50.492 20:54:08 -- common/autotest_common.sh@579 
-- # [[ 0 == 0 ]] 00:36:50.492 20:54:08 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:50.492 20:54:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:50.492 20:54:08 -- common/autotest_common.sh@10 -- # set +x 00:36:50.492 20:54:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:50.492 20:54:08 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:50.492 20:54:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:50.492 20:54:08 -- common/autotest_common.sh@10 -- # set +x 00:36:50.492 20:54:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:50.492 20:54:08 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:50.492 20:54:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:50.492 20:54:08 -- common/autotest_common.sh@10 -- # set +x 00:36:50.492 [2024-04-26 20:54:08.727620] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:50.492 20:54:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:50.492 20:54:08 -- target/dif.sh@30 -- # for sub in "$@" 00:36:50.492 20:54:08 -- target/dif.sh@31 -- # create_subsystem 1 00:36:50.492 20:54:08 -- target/dif.sh@18 -- # local sub_id=1 00:36:50.492 20:54:08 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:50.492 20:54:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:50.492 20:54:08 -- common/autotest_common.sh@10 -- # set +x 00:36:50.492 bdev_null1 00:36:50.492 20:54:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:50.492 20:54:08 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:50.492 20:54:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:50.492 20:54:08 -- common/autotest_common.sh@10 -- # set +x 00:36:50.492 20:54:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:50.492 20:54:08 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:50.492 20:54:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:50.492 20:54:08 -- common/autotest_common.sh@10 -- # set +x 00:36:50.492 20:54:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:50.492 20:54:08 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:50.492 20:54:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:50.492 20:54:08 -- common/autotest_common.sh@10 -- # set +x 00:36:50.492 20:54:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:50.492 20:54:08 -- target/dif.sh@30 -- # for sub in "$@" 00:36:50.492 20:54:08 -- target/dif.sh@31 -- # create_subsystem 2 00:36:50.492 20:54:08 -- target/dif.sh@18 -- # local sub_id=2 00:36:50.492 20:54:08 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:50.492 20:54:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:50.492 20:54:08 -- common/autotest_common.sh@10 -- # set +x 00:36:50.492 bdev_null2 00:36:50.492 20:54:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:50.492 20:54:08 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:50.492 20:54:08 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:36:50.492 20:54:08 -- common/autotest_common.sh@10 -- # set +x 00:36:50.492 20:54:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:50.492 20:54:08 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:50.492 20:54:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:50.492 20:54:08 -- common/autotest_common.sh@10 -- # set +x 00:36:50.492 20:54:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:50.492 20:54:08 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:50.492 20:54:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:50.492 20:54:08 -- common/autotest_common.sh@10 -- # set +x 00:36:50.492 20:54:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:50.492 20:54:08 -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:50.492 20:54:08 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:50.492 20:54:08 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:50.492 20:54:08 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:36:50.492 20:54:08 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:50.492 20:54:08 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:50.492 20:54:08 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:50.492 20:54:08 -- common/autotest_common.sh@1318 -- # local sanitizers 00:36:50.492 20:54:08 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:36:50.492 20:54:08 -- nvmf/common.sh@520 -- # config=() 00:36:50.492 20:54:08 -- common/autotest_common.sh@1320 -- # shift 00:36:50.492 20:54:08 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:36:50.492 20:54:08 -- nvmf/common.sh@520 -- # local subsystem config 00:36:50.492 20:54:08 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:36:50.492 20:54:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:36:50.492 20:54:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:36:50.492 { 00:36:50.492 "params": { 00:36:50.492 "name": "Nvme$subsystem", 00:36:50.492 "trtype": "$TEST_TRANSPORT", 00:36:50.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:50.492 "adrfam": "ipv4", 00:36:50.492 "trsvcid": "$NVMF_PORT", 00:36:50.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:50.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:50.492 "hdgst": ${hdgst:-false}, 00:36:50.492 "ddgst": ${ddgst:-false} 00:36:50.492 }, 00:36:50.492 "method": "bdev_nvme_attach_controller" 00:36:50.492 } 00:36:50.492 EOF 00:36:50.492 )") 00:36:50.492 20:54:08 -- target/dif.sh@82 -- # gen_fio_conf 00:36:50.492 20:54:08 -- target/dif.sh@54 -- # local file 00:36:50.492 20:54:08 -- target/dif.sh@56 -- # cat 00:36:50.492 20:54:08 -- nvmf/common.sh@542 -- # cat 00:36:50.492 20:54:08 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:36:50.492 20:54:08 -- common/autotest_common.sh@1324 -- # grep libasan 00:36:50.492 20:54:08 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:36:50.492 20:54:08 -- target/dif.sh@72 -- # (( file = 1 )) 00:36:50.492 20:54:08 -- target/dif.sh@72 -- # (( file <= files )) 00:36:50.492 20:54:08 -- target/dif.sh@73 -- # cat 00:36:50.492 20:54:08 -- 
nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:36:50.492 20:54:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:36:50.492 { 00:36:50.492 "params": { 00:36:50.492 "name": "Nvme$subsystem", 00:36:50.492 "trtype": "$TEST_TRANSPORT", 00:36:50.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:50.492 "adrfam": "ipv4", 00:36:50.492 "trsvcid": "$NVMF_PORT", 00:36:50.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:50.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:50.492 "hdgst": ${hdgst:-false}, 00:36:50.492 "ddgst": ${ddgst:-false} 00:36:50.492 }, 00:36:50.492 "method": "bdev_nvme_attach_controller" 00:36:50.492 } 00:36:50.492 EOF 00:36:50.492 )") 00:36:50.492 20:54:08 -- nvmf/common.sh@542 -- # cat 00:36:50.492 20:54:08 -- target/dif.sh@72 -- # (( file++ )) 00:36:50.492 20:54:08 -- target/dif.sh@72 -- # (( file <= files )) 00:36:50.492 20:54:08 -- target/dif.sh@73 -- # cat 00:36:50.492 20:54:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:36:50.492 20:54:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:36:50.492 { 00:36:50.492 "params": { 00:36:50.492 "name": "Nvme$subsystem", 00:36:50.492 "trtype": "$TEST_TRANSPORT", 00:36:50.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:50.492 "adrfam": "ipv4", 00:36:50.492 "trsvcid": "$NVMF_PORT", 00:36:50.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:50.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:50.492 "hdgst": ${hdgst:-false}, 00:36:50.492 "ddgst": ${ddgst:-false} 00:36:50.492 }, 00:36:50.492 "method": "bdev_nvme_attach_controller" 00:36:50.492 } 00:36:50.492 EOF 00:36:50.492 )") 00:36:50.492 20:54:08 -- nvmf/common.sh@542 -- # cat 00:36:50.492 20:54:08 -- target/dif.sh@72 -- # (( file++ )) 00:36:50.492 20:54:08 -- target/dif.sh@72 -- # (( file <= files )) 00:36:50.492 20:54:08 -- nvmf/common.sh@544 -- # jq . 
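[Annotation, not part of the captured log.] The config assembly traced above is worth isolating as a pattern: one JSON fragment per subsystem is captured into a bash array through an unquoted here-doc (so $subsystem expands inside it), the fragments are comma-joined by setting IFS, and jq validates and pretty-prints the result. A self-contained sketch of the same technique; the real gen_nvmf_target_json splices the fragments into the larger bdev-subsystem config that fio receives on /dev/fd/62, which this sketch omits:

gen_json() {
    local sub config=()
    for sub in "$@"; do
        config+=("$(
            cat <<EOF
{
  "params": {
    "name": "Nvme$sub",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
    "hostnqn": "nqn.2016-06.io.spdk:host$sub",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,                                  # join array elements with commas
    printf '[%s]\n' "${config[*]}" | jq .        # array-wrap so the joined fragments parse standalone
}
gen_json 0 1 2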
00:36:50.492 20:54:08 -- nvmf/common.sh@545 -- # IFS=, 00:36:50.492 20:54:08 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:36:50.492 "params": { 00:36:50.492 "name": "Nvme0", 00:36:50.492 "trtype": "tcp", 00:36:50.492 "traddr": "10.0.0.2", 00:36:50.492 "adrfam": "ipv4", 00:36:50.492 "trsvcid": "4420", 00:36:50.492 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:50.492 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:50.492 "hdgst": false, 00:36:50.492 "ddgst": false 00:36:50.492 }, 00:36:50.492 "method": "bdev_nvme_attach_controller" 00:36:50.492 },{ 00:36:50.492 "params": { 00:36:50.492 "name": "Nvme1", 00:36:50.492 "trtype": "tcp", 00:36:50.493 "traddr": "10.0.0.2", 00:36:50.493 "adrfam": "ipv4", 00:36:50.493 "trsvcid": "4420", 00:36:50.493 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:50.493 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:50.493 "hdgst": false, 00:36:50.493 "ddgst": false 00:36:50.493 }, 00:36:50.493 "method": "bdev_nvme_attach_controller" 00:36:50.493 },{ 00:36:50.493 "params": { 00:36:50.493 "name": "Nvme2", 00:36:50.493 "trtype": "tcp", 00:36:50.493 "traddr": "10.0.0.2", 00:36:50.493 "adrfam": "ipv4", 00:36:50.493 "trsvcid": "4420", 00:36:50.493 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:50.493 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:50.493 "hdgst": false, 00:36:50.493 "ddgst": false 00:36:50.493 }, 00:36:50.493 "method": "bdev_nvme_attach_controller" 00:36:50.493 }' 00:36:50.493 20:54:08 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:50.493 20:54:08 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:50.493 20:54:08 -- common/autotest_common.sh@1326 -- # break 00:36:50.493 20:54:08 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:50.493 20:54:08 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:51.070 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:51.070 ... 00:36:51.070 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:51.070 ... 00:36:51.070 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:51.070 ... 00:36:51.070 fio-3.35 00:36:51.070 Starting 24 threads 00:36:51.070 EAL: No free 2048 kB hugepages reported on node 1 00:36:51.727 [2024-04-26 20:54:10.046427] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
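[Annotation, not part of the captured log.] One more detail of the launch sequence above worth isolating: before starting fio, the harness resolves which ASan runtime the SPDK fio plugin links against and preloads it ahead of the plugin itself; this is the standard workaround for loading a sanitizer-instrumented shared object into an uninstrumented executable such as stock fio. A standalone sketch using only paths and commands that appear in this trace:

plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev
# resolve the sanitizer runtime the plugin was linked against
# (ldd reports /usr/lib64/libasan.so.8 in this trace)
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
# preload the runtime first, then the plugin, then run fio with the external engine
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61

In the harness, /dev/fd/62 and /dev/fd/61 are process-substitution descriptors carrying the generated JSON bdev config and fio job file; run standalone you would pass ordinary file paths instead.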
00:36:51.727 [2024-04-26 20:54:10.046500] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:37:03.926 00:37:03.926 filename0: (groupid=0, jobs=1): err= 0: pid=3807559: Fri Apr 26 20:54:20 2024 00:37:03.926 read: IOPS=548, BW=2195KiB/s (2248kB/s)(21.5MiB/10018msec) 00:37:03.926 slat (usec): min=4, max=148, avg=17.90, stdev=17.38 00:37:03.926 clat (usec): min=2979, max=43734, avg=29017.99, stdev=3525.16 00:37:03.926 lat (usec): min=2987, max=43753, avg=29035.90, stdev=3526.32 00:37:03.926 clat percentiles (usec): 00:37:03.926 | 1.00th=[ 9765], 5.00th=[25297], 10.00th=[27919], 20.00th=[28705], 00:37:03.926 | 30.00th=[28967], 40.00th=[29230], 50.00th=[29492], 60.00th=[29754], 00:37:03.926 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30802], 95.00th=[31851], 00:37:03.926 | 99.00th=[34866], 99.50th=[35390], 99.90th=[42206], 99.95th=[42206], 00:37:03.926 | 99.99th=[43779] 00:37:03.926 bw ( KiB/s): min= 2048, max= 2480, per=4.27%, avg=2193.60, stdev=95.67, samples=20 00:37:03.926 iops : min= 512, max= 620, avg=548.40, stdev=23.92, samples=20 00:37:03.926 lat (msec) : 4=0.29%, 10=0.76%, 20=2.20%, 50=96.74% 00:37:03.926 cpu : usr=98.87%, sys=0.71%, ctx=65, majf=0, minf=1636 00:37:03.926 IO depths : 1=2.7%, 2=8.3%, 4=22.9%, 8=56.2%, 16=10.0%, 32=0.0%, >=64=0.0% 00:37:03.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.926 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.926 issued rwts: total=5498,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:03.926 filename0: (groupid=0, jobs=1): err= 0: pid=3807560: Fri Apr 26 20:54:20 2024 00:37:03.926 read: IOPS=545, BW=2182KiB/s (2235kB/s)(21.4MiB/10025msec) 00:37:03.926 slat (usec): min=4, max=204, avg=27.51, stdev=33.21 00:37:03.926 clat (usec): min=4358, max=52513, avg=29134.35, stdev=3798.93 00:37:03.926 lat (usec): min=4369, max=52527, avg=29161.86, stdev=3800.18 00:37:03.926 clat percentiles (usec): 00:37:03.926 | 1.00th=[ 8717], 5.00th=[26346], 10.00th=[27919], 20.00th=[28443], 00:37:03.926 | 30.00th=[28967], 40.00th=[29230], 50.00th=[29492], 60.00th=[29754], 00:37:03.926 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[31065], 00:37:03.926 | 99.00th=[43254], 99.50th=[49021], 99.90th=[50070], 99.95th=[51119], 00:37:03.926 | 99.99th=[52691] 00:37:03.926 bw ( KiB/s): min= 2019, max= 2352, per=4.25%, avg=2179.75, stdev=90.84, samples=20 00:37:03.926 iops : min= 504, max= 588, avg=544.90, stdev=22.78, samples=20 00:37:03.926 lat (msec) : 10=1.43%, 20=1.46%, 50=97.00%, 100=0.11% 00:37:03.926 cpu : usr=99.16%, sys=0.42%, ctx=17, majf=0, minf=1633 00:37:03.926 IO depths : 1=5.0%, 2=10.8%, 4=23.5%, 8=53.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:37:03.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.926 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.926 issued rwts: total=5469,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:03.926 filename0: (groupid=0, jobs=1): err= 0: pid=3807561: Fri Apr 26 20:54:20 2024 00:37:03.926 read: IOPS=538, BW=2156KiB/s (2208kB/s)(21.1MiB/10008msec) 00:37:03.926 slat (usec): min=6, max=217, avg=42.55, stdev=32.92 00:37:03.926 clat (usec): min=7467, max=64237, avg=29320.92, stdev=3744.74 00:37:03.926 lat (usec): min=7489, max=64262, avg=29363.47, stdev=3743.42 00:37:03.926 clat percentiles (usec): 00:37:03.926 | 
1.00th=[16909], 5.00th=[26346], 10.00th=[27657], 20.00th=[28443], 00:37:03.926 | 30.00th=[28967], 40.00th=[29230], 50.00th=[29230], 60.00th=[29492], 00:37:03.926 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30540], 95.00th=[32113], 00:37:03.926 | 99.00th=[44303], 99.50th=[53216], 99.90th=[64226], 99.95th=[64226], 00:37:03.926 | 99.99th=[64226] 00:37:03.926 bw ( KiB/s): min= 1923, max= 2416, per=4.20%, avg=2156.79, stdev=96.27, samples=19 00:37:03.926 iops : min= 480, max= 604, avg=539.16, stdev=24.17, samples=19 00:37:03.926 lat (msec) : 10=0.48%, 20=1.26%, 50=97.44%, 100=0.82% 00:37:03.926 cpu : usr=98.96%, sys=0.54%, ctx=105, majf=0, minf=1631 00:37:03.926 IO depths : 1=3.9%, 2=8.8%, 4=20.5%, 8=57.7%, 16=9.2%, 32=0.0%, >=64=0.0% 00:37:03.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.926 complete : 0=0.0%, 4=93.1%, 8=1.6%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.926 issued rwts: total=5394,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:03.926 filename0: (groupid=0, jobs=1): err= 0: pid=3807562: Fri Apr 26 20:54:20 2024 00:37:03.926 read: IOPS=536, BW=2145KiB/s (2197kB/s)(21.0MiB/10025msec) 00:37:03.926 slat (usec): min=6, max=219, avg=33.73, stdev=36.45 00:37:03.926 clat (usec): min=23033, max=37427, avg=29596.03, stdev=1233.42 00:37:03.926 lat (usec): min=23042, max=37441, avg=29629.76, stdev=1227.45 00:37:03.926 clat percentiles (usec): 00:37:03.926 | 1.00th=[27132], 5.00th=[27919], 10.00th=[28443], 20.00th=[28705], 00:37:03.926 | 30.00th=[29230], 40.00th=[29230], 50.00th=[29492], 60.00th=[29754], 00:37:03.926 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[31065], 00:37:03.926 | 99.00th=[35390], 99.50th=[35914], 99.90th=[37487], 99.95th=[37487], 00:37:03.926 | 99.99th=[37487] 00:37:03.927 bw ( KiB/s): min= 2019, max= 2176, per=4.18%, avg=2142.55, stdev=59.74, samples=20 00:37:03.927 iops : min= 504, max= 544, avg=535.60, stdev=15.02, samples=20 00:37:03.927 lat (msec) : 50=100.00% 00:37:03.927 cpu : usr=98.99%, sys=0.62%, ctx=15, majf=0, minf=1635 00:37:03.927 IO depths : 1=5.8%, 2=11.9%, 4=24.6%, 8=50.9%, 16=6.7%, 32=0.0%, >=64=0.0% 00:37:03.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.927 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.927 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:03.927 filename0: (groupid=0, jobs=1): err= 0: pid=3807563: Fri Apr 26 20:54:20 2024 00:37:03.927 read: IOPS=533, BW=2134KiB/s (2185kB/s)(20.9MiB/10016msec) 00:37:03.927 slat (usec): min=5, max=220, avg=65.98, stdev=38.27 00:37:03.927 clat (usec): min=17936, max=63032, avg=29395.63, stdev=2317.92 00:37:03.927 lat (usec): min=17944, max=63060, avg=29461.60, stdev=2313.48 00:37:03.927 clat percentiles (usec): 00:37:03.927 | 1.00th=[26870], 5.00th=[27657], 10.00th=[27919], 20.00th=[28443], 00:37:03.927 | 30.00th=[28705], 40.00th=[28967], 50.00th=[29230], 60.00th=[29492], 00:37:03.927 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30278], 95.00th=[30802], 00:37:03.927 | 99.00th=[33424], 99.50th=[49546], 99.90th=[60031], 99.95th=[62653], 00:37:03.927 | 99.99th=[63177] 00:37:03.927 bw ( KiB/s): min= 2039, max= 2176, per=4.15%, avg=2130.95, stdev=63.03, samples=20 00:37:03.927 iops : min= 509, max= 544, avg=532.70, stdev=15.82, samples=20 00:37:03.927 lat (msec) : 20=0.04%, 50=99.66%, 100=0.30% 00:37:03.927 cpu : usr=98.99%, 
sys=0.60%, ctx=18, majf=0, minf=1635 00:37:03.927 IO depths : 1=5.9%, 2=12.0%, 4=24.5%, 8=51.0%, 16=6.6%, 32=0.0%, >=64=0.0% 00:37:03.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.927 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.927 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:03.927 filename0: (groupid=0, jobs=1): err= 0: pid=3807564: Fri Apr 26 20:54:20 2024 00:37:03.927 read: IOPS=537, BW=2152KiB/s (2203kB/s)(21.1MiB/10023msec) 00:37:03.927 slat (usec): min=6, max=297, avg=61.49, stdev=38.90 00:37:03.927 clat (usec): min=7752, max=48301, avg=29242.35, stdev=2303.67 00:37:03.927 lat (usec): min=7763, max=48313, avg=29303.84, stdev=2302.17 00:37:03.927 clat percentiles (usec): 00:37:03.927 | 1.00th=[19792], 5.00th=[27395], 10.00th=[27919], 20.00th=[28443], 00:37:03.927 | 30.00th=[28705], 40.00th=[28967], 50.00th=[29230], 60.00th=[29492], 00:37:03.927 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30278], 95.00th=[30802], 00:37:03.927 | 99.00th=[35390], 99.50th=[41681], 99.90th=[48497], 99.95th=[48497], 00:37:03.927 | 99.99th=[48497] 00:37:03.927 bw ( KiB/s): min= 2031, max= 2192, per=4.19%, avg=2149.55, stdev=54.63, samples=20 00:37:03.927 iops : min= 507, max= 548, avg=537.35, stdev=13.74, samples=20 00:37:03.927 lat (msec) : 10=0.04%, 20=1.21%, 50=98.76% 00:37:03.927 cpu : usr=97.54%, sys=1.22%, ctx=51, majf=0, minf=1637 00:37:03.927 IO depths : 1=5.9%, 2=11.9%, 4=24.5%, 8=51.0%, 16=6.8%, 32=0.0%, >=64=0.0% 00:37:03.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.927 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.927 issued rwts: total=5392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:03.927 filename0: (groupid=0, jobs=1): err= 0: pid=3807565: Fri Apr 26 20:54:20 2024 00:37:03.927 read: IOPS=520, BW=2083KiB/s (2133kB/s)(20.4MiB/10007msec) 00:37:03.927 slat (usec): min=6, max=222, avg=38.85, stdev=43.07 00:37:03.927 clat (usec): min=7818, max=63150, avg=30433.12, stdev=4872.56 00:37:03.927 lat (usec): min=7826, max=63177, avg=30471.97, stdev=4868.40 00:37:03.927 clat percentiles (usec): 00:37:03.927 | 1.00th=[18220], 5.00th=[27132], 10.00th=[27919], 20.00th=[28705], 00:37:03.927 | 30.00th=[29230], 40.00th=[29492], 50.00th=[29754], 60.00th=[30016], 00:37:03.927 | 70.00th=[30278], 80.00th=[30540], 90.00th=[32900], 95.00th=[40633], 00:37:03.927 | 99.00th=[50070], 99.50th=[52691], 99.90th=[63177], 99.95th=[63177], 00:37:03.927 | 99.99th=[63177] 00:37:03.927 bw ( KiB/s): min= 1888, max= 2176, per=4.07%, avg=2086.74, stdev=81.51, samples=19 00:37:03.927 iops : min= 472, max= 544, avg=521.68, stdev=20.38, samples=19 00:37:03.927 lat (msec) : 10=0.38%, 20=1.23%, 50=97.29%, 100=1.09% 00:37:03.927 cpu : usr=99.05%, sys=0.54%, ctx=14, majf=0, minf=1635 00:37:03.927 IO depths : 1=1.1%, 2=3.0%, 4=11.2%, 8=70.4%, 16=14.3%, 32=0.0%, >=64=0.0% 00:37:03.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.927 complete : 0=0.0%, 4=91.8%, 8=4.8%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.927 issued rwts: total=5212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:03.927 filename0: (groupid=0, jobs=1): err= 0: pid=3807566: Fri Apr 26 20:54:20 2024 00:37:03.927 read: IOPS=531, BW=2124KiB/s 
(2175kB/s)(20.8MiB/10014msec) 00:37:03.927 slat (usec): min=6, max=227, avg=47.38, stdev=43.25 00:37:03.927 clat (usec): min=5980, max=69871, avg=29702.74, stdev=5060.94 00:37:03.927 lat (usec): min=6024, max=69896, avg=29750.12, stdev=5058.07 00:37:03.927 clat percentiles (usec): 00:37:03.927 | 1.00th=[ 8979], 5.00th=[26870], 10.00th=[27919], 20.00th=[28705], 00:37:03.927 | 30.00th=[28967], 40.00th=[29230], 50.00th=[29492], 60.00th=[29492], 00:37:03.927 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30802], 95.00th=[33424], 00:37:03.927 | 99.00th=[50594], 99.50th=[52691], 99.90th=[69731], 99.95th=[69731], 00:37:03.927 | 99.99th=[69731] 00:37:03.927 bw ( KiB/s): min= 1923, max= 2256, per=4.14%, avg=2124.79, stdev=83.51, samples=19 00:37:03.927 iops : min= 480, max= 564, avg=531.16, stdev=20.98, samples=19 00:37:03.927 lat (msec) : 10=1.18%, 20=1.43%, 50=95.68%, 100=1.71% 00:37:03.927 cpu : usr=99.10%, sys=0.48%, ctx=15, majf=0, minf=1633 00:37:03.927 IO depths : 1=3.5%, 2=7.4%, 4=17.5%, 8=61.2%, 16=10.4%, 32=0.0%, >=64=0.0% 00:37:03.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.927 complete : 0=0.0%, 4=92.6%, 8=2.9%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.927 issued rwts: total=5318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:03.927 filename1: (groupid=0, jobs=1): err= 0: pid=3807567: Fri Apr 26 20:54:20 2024 00:37:03.927 read: IOPS=525, BW=2104KiB/s (2154kB/s)(20.6MiB/10006msec) 00:37:03.927 slat (usec): min=5, max=219, avg=38.31, stdev=37.12 00:37:03.927 clat (usec): min=6702, max=65587, avg=30124.75, stdev=6063.56 00:37:03.927 lat (usec): min=6713, max=65613, avg=30163.06, stdev=6059.75 00:37:03.927 clat percentiles (usec): 00:37:03.927 | 1.00th=[ 8455], 5.00th=[25035], 10.00th=[27395], 20.00th=[28443], 00:37:03.927 | 30.00th=[28967], 40.00th=[29230], 50.00th=[29492], 60.00th=[29754], 00:37:03.927 | 70.00th=[30016], 80.00th=[30540], 90.00th=[33162], 95.00th=[41157], 00:37:03.927 | 99.00th=[51643], 99.50th=[52691], 99.90th=[65799], 99.95th=[65799], 00:37:03.927 | 99.99th=[65799] 00:37:03.927 bw ( KiB/s): min= 1776, max= 2240, per=4.11%, avg=2106.32, stdev=104.69, samples=19 00:37:03.927 iops : min= 444, max= 560, avg=526.58, stdev=26.17, samples=19 00:37:03.927 lat (msec) : 10=1.60%, 20=2.00%, 50=93.94%, 100=2.47% 00:37:03.927 cpu : usr=98.93%, sys=0.65%, ctx=15, majf=0, minf=1634 00:37:03.927 IO depths : 1=1.6%, 2=5.1%, 4=16.5%, 8=64.5%, 16=12.3%, 32=0.0%, >=64=0.0% 00:37:03.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.927 complete : 0=0.0%, 4=92.4%, 8=3.3%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.927 issued rwts: total=5262,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:03.927 filename1: (groupid=0, jobs=1): err= 0: pid=3807568: Fri Apr 26 20:54:20 2024 00:37:03.927 read: IOPS=544, BW=2179KiB/s (2232kB/s)(21.3MiB/10014msec) 00:37:03.927 slat (usec): min=5, max=141, avg=18.05, stdev=19.38 00:37:03.927 clat (usec): min=8742, max=49126, avg=29213.24, stdev=2329.04 00:37:03.927 lat (usec): min=8751, max=49153, avg=29231.29, stdev=2329.61 00:37:03.927 clat percentiles (usec): 00:37:03.927 | 1.00th=[18482], 5.00th=[27132], 10.00th=[28181], 20.00th=[28705], 00:37:03.927 | 30.00th=[28967], 40.00th=[29230], 50.00th=[29492], 60.00th=[29754], 00:37:03.927 | 70.00th=[29754], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:37:03.927 | 99.00th=[33162], 99.50th=[37487], 
99.90th=[46924], 99.95th=[47449], 00:37:03.927 | 99.99th=[49021] 00:37:03.927 bw ( KiB/s): min= 2048, max= 2304, per=4.25%, avg=2182.74, stdev=50.12, samples=19 00:37:03.927 iops : min= 512, max= 576, avg=545.68, stdev=12.53, samples=19 00:37:03.927 lat (msec) : 10=0.16%, 20=1.21%, 50=98.63% 00:37:03.927 cpu : usr=97.25%, sys=1.35%, ctx=131, majf=0, minf=1637 00:37:03.927 IO depths : 1=5.3%, 2=11.4%, 4=24.4%, 8=51.7%, 16=7.2%, 32=0.0%, >=64=0.0% 00:37:03.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.927 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.927 issued rwts: total=5456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:03.927 filename1: (groupid=0, jobs=1): err= 0: pid=3807569: Fri Apr 26 20:54:20 2024 00:37:03.927 read: IOPS=536, BW=2147KiB/s (2198kB/s)(21.0MiB/10016msec) 00:37:03.927 slat (usec): min=6, max=212, avg=59.77, stdev=36.94 00:37:03.927 clat (usec): min=15022, max=47769, avg=29277.18, stdev=1937.59 00:37:03.927 lat (usec): min=15035, max=47863, avg=29336.95, stdev=1935.90 00:37:03.927 clat percentiles (usec): 00:37:03.927 | 1.00th=[26084], 5.00th=[27395], 10.00th=[27919], 20.00th=[28443], 00:37:03.927 | 30.00th=[28705], 40.00th=[28967], 50.00th=[29230], 60.00th=[29492], 00:37:03.927 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30278], 95.00th=[31065], 00:37:03.927 | 99.00th=[35390], 99.50th=[43779], 99.90th=[44827], 99.95th=[45351], 00:37:03.927 | 99.99th=[47973] 00:37:03.927 bw ( KiB/s): min= 2032, max= 2192, per=4.19%, avg=2149.05, stdev=54.14, samples=19 00:37:03.927 iops : min= 508, max= 548, avg=537.26, stdev=13.54, samples=19 00:37:03.927 lat (msec) : 20=0.63%, 50=99.37% 00:37:03.927 cpu : usr=99.10%, sys=0.50%, ctx=16, majf=0, minf=1635 00:37:03.927 IO depths : 1=4.6%, 2=10.2%, 4=23.0%, 8=54.0%, 16=8.2%, 32=0.0%, >=64=0.0% 00:37:03.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.927 complete : 0=0.0%, 4=93.8%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.927 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:03.928 filename1: (groupid=0, jobs=1): err= 0: pid=3807570: Fri Apr 26 20:54:20 2024 00:37:03.928 read: IOPS=524, BW=2099KiB/s (2149kB/s)(20.5MiB/10007msec) 00:37:03.928 slat (usec): min=6, max=218, avg=36.23, stdev=37.14 00:37:03.928 clat (usec): min=6557, max=62874, avg=30259.57, stdev=5169.19 00:37:03.928 lat (usec): min=6564, max=62900, avg=30295.80, stdev=5166.22 00:37:03.928 clat percentiles (usec): 00:37:03.928 | 1.00th=[13173], 5.00th=[26870], 10.00th=[27919], 20.00th=[28705], 00:37:03.928 | 30.00th=[28967], 40.00th=[29492], 50.00th=[29492], 60.00th=[30016], 00:37:03.928 | 70.00th=[30278], 80.00th=[30540], 90.00th=[32900], 95.00th=[39584], 00:37:03.928 | 99.00th=[51643], 99.50th=[54789], 99.90th=[62653], 99.95th=[62653], 00:37:03.928 | 99.99th=[62653] 00:37:03.928 bw ( KiB/s): min= 1984, max= 2192, per=4.09%, avg=2099.37, stdev=62.56, samples=19 00:37:03.928 iops : min= 496, max= 548, avg=524.84, stdev=15.64, samples=19 00:37:03.928 lat (msec) : 10=0.69%, 20=1.79%, 50=95.92%, 100=1.60% 00:37:03.928 cpu : usr=98.91%, sys=0.69%, ctx=10, majf=0, minf=1633 00:37:03.928 IO depths : 1=0.1%, 2=2.8%, 4=14.3%, 8=68.6%, 16=14.2%, 32=0.0%, >=64=0.0% 00:37:03.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.928 complete : 0=0.0%, 4=92.2%, 8=3.6%, 16=4.2%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.928 issued rwts: total=5250,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:03.928 filename1: (groupid=0, jobs=1): err= 0: pid=3807571: Fri Apr 26 20:54:20 2024 00:37:03.928 read: IOPS=535, BW=2144KiB/s (2195kB/s)(21.0MiB/10012msec) 00:37:03.928 slat (usec): min=6, max=200, avg=56.59, stdev=36.04 00:37:03.928 clat (usec): min=6992, max=71224, avg=29343.63, stdev=3125.56 00:37:03.928 lat (usec): min=7004, max=71250, avg=29400.22, stdev=3124.71 00:37:03.928 clat percentiles (usec): 00:37:03.928 | 1.00th=[22938], 5.00th=[27395], 10.00th=[27919], 20.00th=[28443], 00:37:03.928 | 30.00th=[28705], 40.00th=[28967], 50.00th=[29230], 60.00th=[29492], 00:37:03.928 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30278], 95.00th=[30802], 00:37:03.928 | 99.00th=[35914], 99.50th=[50594], 99.90th=[68682], 99.95th=[70779], 00:37:03.928 | 99.99th=[70779] 00:37:03.928 bw ( KiB/s): min= 1920, max= 2192, per=4.18%, avg=2142.32, stdev=72.13, samples=19 00:37:03.928 iops : min= 480, max= 548, avg=535.58, stdev=18.03, samples=19 00:37:03.928 lat (msec) : 10=0.28%, 20=0.60%, 50=98.62%, 100=0.50% 00:37:03.928 cpu : usr=99.15%, sys=0.44%, ctx=15, majf=0, minf=1633 00:37:03.928 IO depths : 1=4.2%, 2=9.5%, 4=22.6%, 8=54.9%, 16=8.8%, 32=0.0%, >=64=0.0% 00:37:03.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.928 complete : 0=0.0%, 4=93.7%, 8=0.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.928 issued rwts: total=5366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:03.928 filename1: (groupid=0, jobs=1): err= 0: pid=3807572: Fri Apr 26 20:54:20 2024 00:37:03.928 read: IOPS=522, BW=2092KiB/s (2142kB/s)(20.4MiB/10008msec) 00:37:03.928 slat (usec): min=6, max=199, avg=52.76, stdev=41.54 00:37:03.928 clat (usec): min=7988, max=63820, avg=30117.77, stdev=5390.37 00:37:03.928 lat (usec): min=8005, max=63846, avg=30170.53, stdev=5385.01 00:37:03.928 clat percentiles (usec): 00:37:03.928 | 1.00th=[ 9241], 5.00th=[26870], 10.00th=[27657], 20.00th=[28443], 00:37:03.928 | 30.00th=[28967], 40.00th=[29230], 50.00th=[29492], 60.00th=[29754], 00:37:03.928 | 70.00th=[30016], 80.00th=[30278], 90.00th=[32900], 95.00th=[39584], 00:37:03.928 | 99.00th=[51119], 99.50th=[51643], 99.90th=[63701], 99.95th=[63701], 00:37:03.928 | 99.99th=[63701] 00:37:03.928 bw ( KiB/s): min= 1792, max= 2176, per=4.07%, avg=2089.42, stdev=109.38, samples=19 00:37:03.928 iops : min= 448, max= 544, avg=522.32, stdev=27.41, samples=19 00:37:03.928 lat (msec) : 10=1.30%, 20=0.99%, 50=95.62%, 100=2.08% 00:37:03.928 cpu : usr=99.04%, sys=0.55%, ctx=15, majf=0, minf=1633 00:37:03.928 IO depths : 1=3.4%, 2=8.1%, 4=21.4%, 8=57.4%, 16=9.6%, 32=0.0%, >=64=0.0% 00:37:03.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.928 complete : 0=0.0%, 4=93.6%, 8=1.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.928 issued rwts: total=5234,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:03.928 filename1: (groupid=0, jobs=1): err= 0: pid=3807573: Fri Apr 26 20:54:20 2024 00:37:03.928 read: IOPS=535, BW=2141KiB/s (2192kB/s)(20.9MiB/10014msec) 00:37:03.928 slat (usec): min=5, max=206, avg=65.16, stdev=37.90 00:37:03.928 clat (usec): min=20139, max=61208, avg=29335.78, stdev=2063.48 00:37:03.928 lat (usec): min=20147, max=61234, avg=29400.94, stdev=2057.71 00:37:03.928 clat percentiles 
(usec): 00:37:03.928 | 1.00th=[26870], 5.00th=[27395], 10.00th=[27919], 20.00th=[28443], 00:37:03.928 | 30.00th=[28705], 40.00th=[28967], 50.00th=[29230], 60.00th=[29492], 00:37:03.928 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:37:03.928 | 99.00th=[32900], 99.50th=[35914], 99.90th=[61080], 99.95th=[61080], 00:37:03.928 | 99.99th=[61080] 00:37:03.928 bw ( KiB/s): min= 2039, max= 2176, per=4.17%, avg=2137.15, stdev=60.92, samples=20 00:37:03.928 iops : min= 509, max= 544, avg=534.25, stdev=15.29, samples=20 00:37:03.928 lat (msec) : 50=99.70%, 100=0.30% 00:37:03.928 cpu : usr=99.17%, sys=0.41%, ctx=16, majf=0, minf=1634 00:37:03.928 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:03.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.928 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.928 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:03.928 filename1: (groupid=0, jobs=1): err= 0: pid=3807574: Fri Apr 26 20:54:20 2024 00:37:03.928 read: IOPS=538, BW=2154KiB/s (2206kB/s)(21.1MiB/10023msec) 00:37:03.928 slat (usec): min=5, max=220, avg=45.91, stdev=41.82 00:37:03.928 clat (usec): min=9576, max=45140, avg=29368.97, stdev=1872.05 00:37:03.928 lat (usec): min=9588, max=45152, avg=29414.88, stdev=1866.77 00:37:03.928 clat percentiles (usec): 00:37:03.928 | 1.00th=[23987], 5.00th=[27395], 10.00th=[28181], 20.00th=[28705], 00:37:03.928 | 30.00th=[28967], 40.00th=[29230], 50.00th=[29492], 60.00th=[29754], 00:37:03.928 | 70.00th=[29754], 80.00th=[30278], 90.00th=[30540], 95.00th=[31065], 00:37:03.928 | 99.00th=[35390], 99.50th=[36439], 99.90th=[43254], 99.95th=[43254], 00:37:03.928 | 99.99th=[45351] 00:37:03.928 bw ( KiB/s): min= 2031, max= 2224, per=4.19%, avg=2151.95, stdev=56.62, samples=20 00:37:03.928 iops : min= 507, max= 556, avg=537.95, stdev=14.24, samples=20 00:37:03.928 lat (msec) : 10=0.07%, 20=0.70%, 50=99.22% 00:37:03.928 cpu : usr=99.14%, sys=0.45%, ctx=17, majf=0, minf=1637 00:37:03.928 IO depths : 1=5.3%, 2=11.2%, 4=23.8%, 8=52.5%, 16=7.2%, 32=0.0%, >=64=0.0% 00:37:03.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.928 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.928 issued rwts: total=5398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:03.928 filename2: (groupid=0, jobs=1): err= 0: pid=3807575: Fri Apr 26 20:54:20 2024 00:37:03.928 read: IOPS=545, BW=2181KiB/s (2233kB/s)(21.3MiB/10009msec) 00:37:03.928 slat (usec): min=4, max=217, avg=37.71, stdev=36.89 00:37:03.928 clat (usec): min=7030, max=64385, avg=29006.93, stdev=4778.95 00:37:03.928 lat (usec): min=7043, max=64404, avg=29044.64, stdev=4780.94 00:37:03.928 clat percentiles (usec): 00:37:03.928 | 1.00th=[15926], 5.00th=[20841], 10.00th=[25822], 20.00th=[28181], 00:37:03.928 | 30.00th=[28705], 40.00th=[28967], 50.00th=[29230], 60.00th=[29492], 00:37:03.928 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30802], 95.00th=[34341], 00:37:03.928 | 99.00th=[47449], 99.50th=[50070], 99.90th=[64226], 99.95th=[64226], 00:37:03.928 | 99.99th=[64226] 00:37:03.928 bw ( KiB/s): min= 1920, max= 2416, per=4.26%, avg=2183.16, stdev=105.22, samples=19 00:37:03.928 iops : min= 480, max= 604, avg=545.79, stdev=26.31, samples=19 00:37:03.928 lat (msec) : 10=0.49%, 20=4.23%, 50=94.81%, 100=0.46% 
00:37:03.928 cpu : usr=98.88%, sys=0.70%, ctx=15, majf=0, minf=1634 00:37:03.928 IO depths : 1=3.2%, 2=7.2%, 4=17.1%, 8=62.0%, 16=10.4%, 32=0.0%, >=64=0.0% 00:37:03.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.928 complete : 0=0.0%, 4=92.2%, 8=3.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.928 issued rwts: total=5457,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:03.928 filename2: (groupid=0, jobs=1): err= 0: pid=3807576: Fri Apr 26 20:54:20 2024 00:37:03.928 read: IOPS=531, BW=2127KiB/s (2178kB/s)(20.8MiB/10019msec) 00:37:03.928 slat (usec): min=4, max=210, avg=61.46, stdev=38.10 00:37:03.928 clat (usec): min=11258, max=63711, avg=29562.83, stdev=2764.02 00:37:03.928 lat (usec): min=11265, max=63737, avg=29624.29, stdev=2758.13 00:37:03.928 clat percentiles (usec): 00:37:03.928 | 1.00th=[26870], 5.00th=[27657], 10.00th=[28181], 20.00th=[28443], 00:37:03.928 | 30.00th=[28705], 40.00th=[28967], 50.00th=[29230], 60.00th=[29492], 00:37:03.928 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30540], 95.00th=[31327], 00:37:03.928 | 99.00th=[42730], 99.50th=[47973], 99.90th=[63701], 99.95th=[63701], 00:37:03.928 | 99.99th=[63701] 00:37:03.928 bw ( KiB/s): min= 2027, max= 2176, per=4.14%, avg=2123.75, stdev=66.01, samples=20 00:37:03.928 iops : min= 506, max= 544, avg=530.90, stdev=16.56, samples=20 00:37:03.928 lat (msec) : 20=0.15%, 50=99.55%, 100=0.30% 00:37:03.928 cpu : usr=99.12%, sys=0.46%, ctx=20, majf=0, minf=1633 00:37:03.928 IO depths : 1=5.9%, 2=12.1%, 4=24.7%, 8=50.7%, 16=6.6%, 32=0.0%, >=64=0.0% 00:37:03.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.929 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.929 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:03.929 filename2: (groupid=0, jobs=1): err= 0: pid=3807577: Fri Apr 26 20:54:20 2024 00:37:03.929 read: IOPS=537, BW=2150KiB/s (2202kB/s)(21.0MiB/10001msec) 00:37:03.929 slat (usec): min=5, max=192, avg=35.54, stdev=36.83 00:37:03.929 clat (usec): min=12027, max=43748, avg=29519.00, stdev=1492.13 00:37:03.929 lat (usec): min=12042, max=43764, avg=29554.54, stdev=1486.17 00:37:03.929 clat percentiles (usec): 00:37:03.929 | 1.00th=[26870], 5.00th=[27657], 10.00th=[28181], 20.00th=[28705], 00:37:03.929 | 30.00th=[29230], 40.00th=[29492], 50.00th=[29492], 60.00th=[29754], 00:37:03.929 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[31327], 00:37:03.929 | 99.00th=[33817], 99.50th=[34866], 99.90th=[40109], 99.95th=[40109], 00:37:03.929 | 99.99th=[43779] 00:37:03.929 bw ( KiB/s): min= 2048, max= 2176, per=4.20%, avg=2155.79, stdev=47.95, samples=19 00:37:03.929 iops : min= 512, max= 544, avg=538.95, stdev=11.99, samples=19 00:37:03.929 lat (msec) : 20=0.41%, 50=99.59% 00:37:03.929 cpu : usr=98.99%, sys=0.59%, ctx=15, majf=0, minf=1636 00:37:03.929 IO depths : 1=5.2%, 2=10.7%, 4=23.1%, 8=53.5%, 16=7.6%, 32=0.0%, >=64=0.0% 00:37:03.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.929 complete : 0=0.0%, 4=93.8%, 8=0.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.929 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:03.929 filename2: (groupid=0, jobs=1): err= 0: pid=3807578: Fri Apr 26 20:54:20 2024 00:37:03.929 read: IOPS=538, BW=2152KiB/s 
(2204kB/s)(21.1MiB/10021msec) 00:37:03.929 slat (usec): min=5, max=200, avg=59.66, stdev=38.66 00:37:03.929 clat (usec): min=9209, max=44413, avg=29245.88, stdev=1920.17 00:37:03.929 lat (usec): min=9220, max=44423, avg=29305.54, stdev=1918.55 00:37:03.929 clat percentiles (usec): 00:37:03.929 | 1.00th=[20579], 5.00th=[27395], 10.00th=[27919], 20.00th=[28443], 00:37:03.929 | 30.00th=[28705], 40.00th=[28967], 50.00th=[29230], 60.00th=[29492], 00:37:03.929 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30278], 95.00th=[30802], 00:37:03.929 | 99.00th=[33817], 99.50th=[39060], 99.90th=[44303], 99.95th=[44303], 00:37:03.929 | 99.99th=[44303] 00:37:03.929 bw ( KiB/s): min= 2048, max= 2176, per=4.19%, avg=2150.40, stdev=52.53, samples=20 00:37:03.929 iops : min= 512, max= 544, avg=537.60, stdev=13.13, samples=20 00:37:03.929 lat (msec) : 10=0.13%, 20=0.69%, 50=99.18% 00:37:03.929 cpu : usr=99.11%, sys=0.48%, ctx=15, majf=0, minf=1633 00:37:03.929 IO depths : 1=5.7%, 2=11.6%, 4=24.1%, 8=51.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:37:03.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.929 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.929 issued rwts: total=5392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:03.929 filename2: (groupid=0, jobs=1): err= 0: pid=3807579: Fri Apr 26 20:54:20 2024 00:37:03.929 read: IOPS=533, BW=2133KiB/s (2184kB/s)(20.9MiB/10022msec) 00:37:03.929 slat (usec): min=5, max=220, avg=68.09, stdev=39.05 00:37:03.929 clat (usec): min=17872, max=58765, avg=29369.33, stdev=2245.95 00:37:03.929 lat (usec): min=17880, max=58792, avg=29437.41, stdev=2240.71 00:37:03.929 clat percentiles (usec): 00:37:03.929 | 1.00th=[26870], 5.00th=[27395], 10.00th=[27919], 20.00th=[28443], 00:37:03.929 | 30.00th=[28705], 40.00th=[28967], 50.00th=[29230], 60.00th=[29492], 00:37:03.929 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30278], 95.00th=[30802], 00:37:03.929 | 99.00th=[36439], 99.50th=[45351], 99.90th=[58983], 99.95th=[58983], 00:37:03.929 | 99.99th=[58983] 00:37:03.929 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2131.20, stdev=61.11, samples=20 00:37:03.929 iops : min= 512, max= 544, avg=532.80, stdev=15.28, samples=20 00:37:03.929 lat (msec) : 20=0.11%, 50=99.59%, 100=0.30% 00:37:03.929 cpu : usr=99.02%, sys=0.56%, ctx=15, majf=0, minf=1634 00:37:03.929 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:37:03.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.929 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.929 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:03.929 filename2: (groupid=0, jobs=1): err= 0: pid=3807580: Fri Apr 26 20:54:20 2024 00:37:03.929 read: IOPS=531, BW=2127KiB/s (2178kB/s)(20.8MiB/10015msec) 00:37:03.929 slat (usec): min=4, max=217, avg=39.48, stdev=26.64 00:37:03.929 clat (usec): min=6301, max=70567, avg=29777.67, stdev=4375.34 00:37:03.929 lat (usec): min=6310, max=70588, avg=29817.16, stdev=4374.35 00:37:03.929 clat percentiles (usec): 00:37:03.929 | 1.00th=[15664], 5.00th=[27132], 10.00th=[28181], 20.00th=[28705], 00:37:03.929 | 30.00th=[28967], 40.00th=[29230], 50.00th=[29492], 60.00th=[29754], 00:37:03.929 | 70.00th=[30016], 80.00th=[30278], 90.00th=[31327], 95.00th=[34341], 00:37:03.929 | 99.00th=[47449], 99.50th=[50070], 99.90th=[70779], 
99.95th=[70779], 00:37:03.929 | 99.99th=[70779] 00:37:03.929 bw ( KiB/s): min= 1920, max= 2248, per=4.15%, avg=2129.26, stdev=79.61, samples=19 00:37:03.929 iops : min= 480, max= 562, avg=532.32, stdev=19.90, samples=19 00:37:03.929 lat (msec) : 10=0.21%, 20=2.37%, 50=96.98%, 100=0.45% 00:37:03.929 cpu : usr=99.11%, sys=0.49%, ctx=20, majf=0, minf=1635 00:37:03.929 IO depths : 1=2.7%, 2=6.6%, 4=18.9%, 8=61.2%, 16=10.6%, 32=0.0%, >=64=0.0% 00:37:03.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.929 complete : 0=0.0%, 4=93.0%, 8=2.0%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.929 issued rwts: total=5325,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:03.929 filename2: (groupid=0, jobs=1): err= 0: pid=3807581: Fri Apr 26 20:54:20 2024 00:37:03.929 read: IOPS=531, BW=2125KiB/s (2176kB/s)(20.8MiB/10009msec) 00:37:03.929 slat (usec): min=6, max=363, avg=55.11, stdev=53.49 00:37:03.929 clat (usec): min=5650, max=64671, avg=29592.08, stdev=3951.81 00:37:03.929 lat (usec): min=5660, max=64698, avg=29647.19, stdev=3948.69 00:37:03.929 clat percentiles (usec): 00:37:03.929 | 1.00th=[17171], 5.00th=[27132], 10.00th=[27657], 20.00th=[28443], 00:37:03.929 | 30.00th=[28705], 40.00th=[28967], 50.00th=[29230], 60.00th=[29492], 00:37:03.929 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30802], 95.00th=[33162], 00:37:03.929 | 99.00th=[47449], 99.50th=[52691], 99.90th=[64750], 99.95th=[64750], 00:37:03.929 | 99.99th=[64750] 00:37:03.929 bw ( KiB/s): min= 1920, max= 2256, per=4.15%, avg=2128.84, stdev=86.73, samples=19 00:37:03.929 iops : min= 480, max= 564, avg=532.21, stdev=21.68, samples=19 00:37:03.929 lat (msec) : 10=0.19%, 20=1.32%, 50=97.65%, 100=0.85% 00:37:03.929 cpu : usr=99.08%, sys=0.51%, ctx=14, majf=0, minf=1636 00:37:03.929 IO depths : 1=1.9%, 2=6.5%, 4=20.1%, 8=60.1%, 16=11.4%, 32=0.0%, >=64=0.0% 00:37:03.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.929 complete : 0=0.0%, 4=93.3%, 8=1.8%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.929 issued rwts: total=5318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:03.929 filename2: (groupid=0, jobs=1): err= 0: pid=3807582: Fri Apr 26 20:54:20 2024 00:37:03.929 read: IOPS=531, BW=2128KiB/s (2179kB/s)(20.8MiB/10017msec) 00:37:03.929 slat (usec): min=6, max=223, avg=57.43, stdev=33.33 00:37:03.929 clat (usec): min=16586, max=68660, avg=29566.93, stdev=2903.14 00:37:03.929 lat (usec): min=16598, max=68689, avg=29624.37, stdev=2897.38 00:37:03.929 clat percentiles (usec): 00:37:03.929 | 1.00th=[26608], 5.00th=[27395], 10.00th=[27919], 20.00th=[28443], 00:37:03.929 | 30.00th=[28967], 40.00th=[29230], 50.00th=[29230], 60.00th=[29492], 00:37:03.929 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30278], 95.00th=[31065], 00:37:03.929 | 99.00th=[41157], 99.50th=[44827], 99.90th=[68682], 99.95th=[68682], 00:37:03.929 | 99.99th=[68682] 00:37:03.929 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2127.75, stdev=80.45, samples=20 00:37:03.929 iops : min= 480, max= 544, avg=531.90, stdev=20.21, samples=20 00:37:03.929 lat (msec) : 20=0.15%, 50=99.51%, 100=0.34% 00:37:03.929 cpu : usr=98.67%, sys=0.82%, ctx=111, majf=0, minf=1636 00:37:03.929 IO depths : 1=5.4%, 2=11.2%, 4=23.9%, 8=52.3%, 16=7.1%, 32=0.0%, >=64=0.0% 00:37:03.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.929 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.929 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:03.929 00:37:03.929 Run status group 0 (all jobs): 00:37:03.929 READ: bw=50.1MiB/s (52.5MB/s), 2083KiB/s-2195KiB/s (2133kB/s-2248kB/s), io=502MiB (527MB), run=10001-10025msec 00:37:03.929 ----------------------------------------------------- 00:37:03.929 Suppressions used: 00:37:03.929 count bytes template 00:37:03.929 45 402 /usr/src/fio/parse.c 00:37:03.929 1 8 libtcmalloc_minimal.so 00:37:03.929 1 904 libcrypto.so 00:37:03.929 ----------------------------------------------------- 00:37:03.929 00:37:03.929 20:54:21 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:03.929 20:54:21 -- target/dif.sh@43 -- # local sub 00:37:03.929 20:54:21 -- target/dif.sh@45 -- # for sub in "$@" 00:37:03.929 20:54:21 -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:03.929 20:54:21 -- target/dif.sh@36 -- # local sub_id=0 00:37:03.929 20:54:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:03.929 20:54:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:03.929 20:54:21 -- common/autotest_common.sh@10 -- # set +x 00:37:03.929 20:54:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:03.929 20:54:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:03.929 20:54:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:03.929 20:54:21 -- common/autotest_common.sh@10 -- # set +x 00:37:03.929 20:54:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:03.929 20:54:21 -- target/dif.sh@45 -- # for sub in "$@" 00:37:03.929 20:54:21 -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:03.929 20:54:21 -- target/dif.sh@36 -- # local sub_id=1 00:37:03.929 20:54:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:03.929 20:54:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:03.929 20:54:21 -- common/autotest_common.sh@10 -- # set +x 00:37:03.929 20:54:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:03.929 20:54:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:03.929 20:54:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:03.929 20:54:21 -- common/autotest_common.sh@10 -- # set +x 00:37:03.929 20:54:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:03.930 20:54:21 -- target/dif.sh@45 -- # for sub in "$@" 00:37:03.930 20:54:21 -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:03.930 20:54:21 -- target/dif.sh@36 -- # local sub_id=2 00:37:03.930 20:54:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:03.930 20:54:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:03.930 20:54:21 -- common/autotest_common.sh@10 -- # set +x 00:37:03.930 20:54:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:03.930 20:54:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:03.930 20:54:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:03.930 20:54:21 -- common/autotest_common.sh@10 -- # set +x 00:37:03.930 20:54:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:03.930 20:54:21 -- target/dif.sh@115 -- # NULL_DIF=1 00:37:03.930 20:54:21 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:03.930 20:54:21 -- target/dif.sh@115 -- # numjobs=2 00:37:03.930 20:54:21 -- target/dif.sh@115 -- # iodepth=8 00:37:03.930 20:54:21 -- target/dif.sh@115 -- # runtime=5 00:37:03.930 20:54:21 -- 
target/dif.sh@115 -- # files=1 00:37:03.930 20:54:21 -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:03.930 20:54:21 -- target/dif.sh@28 -- # local sub 00:37:03.930 20:54:21 -- target/dif.sh@30 -- # for sub in "$@" 00:37:03.930 20:54:21 -- target/dif.sh@31 -- # create_subsystem 0 00:37:03.930 20:54:21 -- target/dif.sh@18 -- # local sub_id=0 00:37:03.930 20:54:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:03.930 20:54:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:03.930 20:54:21 -- common/autotest_common.sh@10 -- # set +x 00:37:03.930 bdev_null0 00:37:03.930 20:54:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:03.930 20:54:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:03.930 20:54:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:03.930 20:54:21 -- common/autotest_common.sh@10 -- # set +x 00:37:03.930 20:54:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:03.930 20:54:21 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:03.930 20:54:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:03.930 20:54:21 -- common/autotest_common.sh@10 -- # set +x 00:37:03.930 20:54:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:03.930 20:54:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:03.930 20:54:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:03.930 20:54:21 -- common/autotest_common.sh@10 -- # set +x 00:37:03.930 [2024-04-26 20:54:21.127074] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:03.930 20:54:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:03.930 20:54:21 -- target/dif.sh@30 -- # for sub in "$@" 00:37:03.930 20:54:21 -- target/dif.sh@31 -- # create_subsystem 1 00:37:03.930 20:54:21 -- target/dif.sh@18 -- # local sub_id=1 00:37:03.930 20:54:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:03.930 20:54:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:03.930 20:54:21 -- common/autotest_common.sh@10 -- # set +x 00:37:03.930 bdev_null1 00:37:03.930 20:54:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:03.930 20:54:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:03.930 20:54:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:03.930 20:54:21 -- common/autotest_common.sh@10 -- # set +x 00:37:03.930 20:54:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:03.930 20:54:21 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:03.930 20:54:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:03.930 20:54:21 -- common/autotest_common.sh@10 -- # set +x 00:37:03.930 20:54:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:03.930 20:54:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:03.930 20:54:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:03.930 20:54:21 -- common/autotest_common.sh@10 -- # set +x 00:37:03.930 20:54:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:03.930 20:54:21 -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:03.930 20:54:21 
-- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:03.930 20:54:21 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:03.930 20:54:21 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:37:03.930 20:54:21 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:03.930 20:54:21 -- common/autotest_common.sh@1318 -- # local sanitizers 00:37:03.930 20:54:21 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:03.930 20:54:21 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:37:03.930 20:54:21 -- common/autotest_common.sh@1320 -- # shift 00:37:03.930 20:54:21 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:37:03.930 20:54:21 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:03.930 20:54:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:37:03.930 20:54:21 -- nvmf/common.sh@520 -- # config=() 00:37:03.930 20:54:21 -- nvmf/common.sh@520 -- # local subsystem config 00:37:03.930 20:54:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:37:03.930 20:54:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:37:03.930 { 00:37:03.930 "params": { 00:37:03.930 "name": "Nvme$subsystem", 00:37:03.930 "trtype": "$TEST_TRANSPORT", 00:37:03.930 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:03.930 "adrfam": "ipv4", 00:37:03.930 "trsvcid": "$NVMF_PORT", 00:37:03.930 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:03.930 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:03.930 "hdgst": ${hdgst:-false}, 00:37:03.930 "ddgst": ${ddgst:-false} 00:37:03.930 }, 00:37:03.930 "method": "bdev_nvme_attach_controller" 00:37:03.930 } 00:37:03.930 EOF 00:37:03.930 )") 00:37:03.930 20:54:21 -- target/dif.sh@82 -- # gen_fio_conf 00:37:03.930 20:54:21 -- target/dif.sh@54 -- # local file 00:37:03.930 20:54:21 -- target/dif.sh@56 -- # cat 00:37:03.930 20:54:21 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:37:03.930 20:54:21 -- nvmf/common.sh@542 -- # cat 00:37:03.930 20:54:21 -- common/autotest_common.sh@1324 -- # grep libasan 00:37:03.930 20:54:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:37:03.930 20:54:21 -- target/dif.sh@72 -- # (( file = 1 )) 00:37:03.930 20:54:21 -- target/dif.sh@72 -- # (( file <= files )) 00:37:03.930 20:54:21 -- target/dif.sh@73 -- # cat 00:37:03.930 20:54:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:37:03.930 20:54:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:37:03.930 { 00:37:03.930 "params": { 00:37:03.930 "name": "Nvme$subsystem", 00:37:03.930 "trtype": "$TEST_TRANSPORT", 00:37:03.930 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:03.930 "adrfam": "ipv4", 00:37:03.930 "trsvcid": "$NVMF_PORT", 00:37:03.930 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:03.930 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:03.930 "hdgst": ${hdgst:-false}, 00:37:03.930 "ddgst": ${ddgst:-false} 00:37:03.930 }, 00:37:03.930 "method": "bdev_nvme_attach_controller" 00:37:03.930 } 00:37:03.930 EOF 00:37:03.930 )") 00:37:03.930 20:54:21 -- target/dif.sh@72 -- # (( file++ )) 00:37:03.930 20:54:21 -- target/dif.sh@72 -- # (( file <= files )) 00:37:03.930 20:54:21 -- nvmf/common.sh@542 -- # cat 00:37:03.930 20:54:21 -- nvmf/common.sh@544 -- # jq . 
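
Annotation: the heredoc loop traced above is how gen_nvmf_target_json assembles one bdev_nvme_attach_controller entry per subsystem before jq joins them. A minimal standalone sketch of the same pattern, with the values this run substitutes for $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT filled in (the enclosing document structure that jq adds upstream is omitted here):

# Sketch (annotation, not part of the run): per-subsystem config assembly as
# seen in the trace, using the tcp/10.0.0.2:4420 endpoint from this job.
config=()
for sub in 0 1; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$sub",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
    "hostnqn": "nqn.2016-06.io.spdk:host$sub",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,
printf '%s\n' "${config[*]}"   # comma-joined list, as the next trace entry prints
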
00:37:03.930 20:54:21 -- nvmf/common.sh@545 -- # IFS=, 00:37:03.930 20:54:21 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:37:03.930 "params": { 00:37:03.930 "name": "Nvme0", 00:37:03.930 "trtype": "tcp", 00:37:03.930 "traddr": "10.0.0.2", 00:37:03.930 "adrfam": "ipv4", 00:37:03.930 "trsvcid": "4420", 00:37:03.930 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:03.930 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:03.930 "hdgst": false, 00:37:03.930 "ddgst": false 00:37:03.930 }, 00:37:03.930 "method": "bdev_nvme_attach_controller" 00:37:03.930 },{ 00:37:03.930 "params": { 00:37:03.930 "name": "Nvme1", 00:37:03.930 "trtype": "tcp", 00:37:03.930 "traddr": "10.0.0.2", 00:37:03.930 "adrfam": "ipv4", 00:37:03.930 "trsvcid": "4420", 00:37:03.930 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:03.930 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:03.930 "hdgst": false, 00:37:03.930 "ddgst": false 00:37:03.930 }, 00:37:03.930 "method": "bdev_nvme_attach_controller" 00:37:03.930 }' 00:37:03.930 20:54:21 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:03.930 20:54:21 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:03.930 20:54:21 -- common/autotest_common.sh@1326 -- # break 00:37:03.930 20:54:21 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:03.930 20:54:21 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:03.930 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:03.930 ... 00:37:03.930 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:03.930 ... 00:37:03.930 fio-3.35 00:37:03.930 Starting 4 threads 00:37:03.930 EAL: No free 2048 kB hugepages reported on node 1 00:37:03.930 [2024-04-26 20:54:22.235378] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
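
Annotation: the asan_lib/LD_PRELOAD steps just traced locate the ASAN runtime that the fio plugin links against and preload it ahead of the plugin, so the sanitizer is initialized before fio dlopens the engine. A condensed sketch of that detection, using the plugin path from this workspace:

# Sketch of the sanitizer-preload step shown in the trace above.
plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
# /dev/fd/62 and /dev/fd/61 are the caller's process-substitution descriptors
# (generated JSON config and jobfile); they exist only inside the test script.
[ -n "$asan_lib" ] && LD_PRELOAD="$asan_lib $plugin" \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
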
00:37:03.930 [2024-04-26 20:54:22.235453] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:37:09.197 00:37:09.197 filename0: (groupid=0, jobs=1): err= 0: pid=3810128: Fri Apr 26 20:54:27 2024 00:37:09.197 read: IOPS=2701, BW=21.1MiB/s (22.1MB/s)(106MiB/5004msec) 00:37:09.197 slat (nsec): min=5964, max=72221, avg=8558.24, stdev=4127.62 00:37:09.197 clat (usec): min=1106, max=6094, avg=2938.08, stdev=562.30 00:37:09.197 lat (usec): min=1117, max=6104, avg=2946.64, stdev=562.66 00:37:09.197 clat percentiles (usec): 00:37:09.197 | 1.00th=[ 1860], 5.00th=[ 2114], 10.00th=[ 2311], 20.00th=[ 2507], 00:37:09.197 | 30.00th=[ 2638], 40.00th=[ 2737], 50.00th=[ 2835], 60.00th=[ 2999], 00:37:09.197 | 70.00th=[ 3163], 80.00th=[ 3359], 90.00th=[ 3621], 95.00th=[ 3916], 00:37:09.197 | 99.00th=[ 4752], 99.50th=[ 5080], 99.90th=[ 5604], 99.95th=[ 5669], 00:37:09.197 | 99.99th=[ 5932] 00:37:09.197 bw ( KiB/s): min=20256, max=23504, per=25.59%, avg=21616.00, stdev=1167.55, samples=10 00:37:09.197 iops : min= 2532, max= 2938, avg=2702.00, stdev=145.94, samples=10 00:37:09.197 lat (msec) : 2=2.34%, 4=93.66%, 10=4.00% 00:37:09.197 cpu : usr=97.26%, sys=2.42%, ctx=8, majf=0, minf=1635 00:37:09.197 IO depths : 1=0.1%, 2=2.4%, 4=68.5%, 8=29.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:09.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.197 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.197 issued rwts: total=13518,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:09.197 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:09.197 filename0: (groupid=0, jobs=1): err= 0: pid=3810129: Fri Apr 26 20:54:27 2024 00:37:09.197 read: IOPS=2625, BW=20.5MiB/s (21.5MB/s)(103MiB/5002msec) 00:37:09.197 slat (nsec): min=5966, max=72226, avg=8133.26, stdev=3850.60 00:37:09.197 clat (usec): min=1408, max=11722, avg=3024.58, stdev=655.26 00:37:09.197 lat (usec): min=1416, max=11749, avg=3032.72, stdev=655.27 00:37:09.197 clat percentiles (usec): 00:37:09.197 | 1.00th=[ 1942], 5.00th=[ 2212], 10.00th=[ 2376], 20.00th=[ 2540], 00:37:09.197 | 30.00th=[ 2671], 40.00th=[ 2769], 50.00th=[ 2900], 60.00th=[ 3032], 00:37:09.197 | 70.00th=[ 3228], 80.00th=[ 3458], 90.00th=[ 3818], 95.00th=[ 4228], 00:37:09.197 | 99.00th=[ 5145], 99.50th=[ 5407], 99.90th=[ 6128], 99.95th=[11469], 00:37:09.197 | 99.99th=[11600] 00:37:09.197 bw ( KiB/s): min=19280, max=22912, per=24.87%, avg=21008.70, stdev=1351.72, samples=10 00:37:09.197 iops : min= 2410, max= 2864, avg=2626.00, stdev=169.05, samples=10 00:37:09.197 lat (msec) : 2=1.39%, 4=91.54%, 10=7.01%, 20=0.06% 00:37:09.197 cpu : usr=97.48%, sys=2.20%, ctx=8, majf=0, minf=1637 00:37:09.197 IO depths : 1=0.1%, 2=1.4%, 4=70.2%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:09.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.197 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.197 issued rwts: total=13135,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:09.197 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:09.197 filename1: (groupid=0, jobs=1): err= 0: pid=3810130: Fri Apr 26 20:54:27 2024 00:37:09.197 read: IOPS=2524, BW=19.7MiB/s (20.7MB/s)(98.7MiB/5003msec) 00:37:09.197 slat (nsec): min=5073, max=69046, avg=7454.51, stdev=3246.62 00:37:09.197 clat (usec): min=722, max=51699, avg=3148.76, stdev=1356.01 00:37:09.197 lat (usec): min=730, max=51725, avg=3156.22, stdev=1356.08 00:37:09.197 clat percentiles (usec): 00:37:09.197 
| 1.00th=[ 2147], 5.00th=[ 2409], 10.00th=[ 2507], 20.00th=[ 2638], 00:37:09.197 | 30.00th=[ 2737], 40.00th=[ 2835], 50.00th=[ 2999], 60.00th=[ 3163], 00:37:09.197 | 70.00th=[ 3326], 80.00th=[ 3589], 90.00th=[ 3916], 95.00th=[ 4228], 00:37:09.197 | 99.00th=[ 4883], 99.50th=[ 5145], 99.90th=[ 5866], 99.95th=[51643], 00:37:09.197 | 99.99th=[51643] 00:37:09.197 bw ( KiB/s): min=18148, max=22208, per=23.91%, avg=20198.80, stdev=1384.33, samples=10 00:37:09.197 iops : min= 2268, max= 2776, avg=2524.80, stdev=173.12, samples=10 00:37:09.197 lat (usec) : 750=0.02%, 1000=0.02% 00:37:09.197 lat (msec) : 2=0.36%, 4=90.98%, 10=8.56%, 100=0.06% 00:37:09.197 cpu : usr=97.82%, sys=1.86%, ctx=8, majf=0, minf=1635 00:37:09.197 IO depths : 1=0.1%, 2=1.1%, 4=70.4%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:09.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.197 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.197 issued rwts: total=12630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:09.197 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:09.197 filename1: (groupid=0, jobs=1): err= 0: pid=3810131: Fri Apr 26 20:54:27 2024 00:37:09.197 read: IOPS=2707, BW=21.2MiB/s (22.2MB/s)(106MiB/5003msec) 00:37:09.197 slat (nsec): min=5977, max=65208, avg=8862.08, stdev=4390.46 00:37:09.197 clat (usec): min=996, max=5854, avg=2930.79, stdev=575.63 00:37:09.197 lat (usec): min=1008, max=5878, avg=2939.65, stdev=575.76 00:37:09.197 clat percentiles (usec): 00:37:09.197 | 1.00th=[ 1844], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2507], 00:37:09.197 | 30.00th=[ 2606], 40.00th=[ 2737], 50.00th=[ 2802], 60.00th=[ 2966], 00:37:09.197 | 70.00th=[ 3097], 80.00th=[ 3326], 90.00th=[ 3687], 95.00th=[ 3982], 00:37:09.197 | 99.00th=[ 4817], 99.50th=[ 5080], 99.90th=[ 5538], 99.95th=[ 5669], 00:37:09.197 | 99.99th=[ 5866] 00:37:09.197 bw ( KiB/s): min=19744, max=23584, per=25.65%, avg=21664.00, stdev=1219.41, samples=10 00:37:09.197 iops : min= 2468, max= 2948, avg=2708.00, stdev=152.43, samples=10 00:37:09.197 lat (usec) : 1000=0.01% 00:37:09.197 lat (msec) : 2=2.20%, 4=92.82%, 10=4.97% 00:37:09.197 cpu : usr=97.56%, sys=2.12%, ctx=8, majf=0, minf=1633 00:37:09.197 IO depths : 1=0.1%, 2=1.6%, 4=69.2%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:09.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.197 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.197 issued rwts: total=13546,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:09.197 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:09.197 00:37:09.197 Run status group 0 (all jobs): 00:37:09.197 READ: bw=82.5MiB/s (86.5MB/s), 19.7MiB/s-21.2MiB/s (20.7MB/s-22.2MB/s), io=413MiB (433MB), run=5002-5004msec 00:37:09.766 ----------------------------------------------------- 00:37:09.766 Suppressions used: 00:37:09.766 count bytes template 00:37:09.766 6 52 /usr/src/fio/parse.c 00:37:09.766 1 8 libtcmalloc_minimal.so 00:37:09.766 1 904 libcrypto.so 00:37:09.766 ----------------------------------------------------- 00:37:09.766 00:37:09.766 20:54:28 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:09.766 20:54:28 -- target/dif.sh@43 -- # local sub 00:37:09.766 20:54:28 -- target/dif.sh@45 -- # for sub in "$@" 00:37:09.766 20:54:28 -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:09.766 20:54:28 -- target/dif.sh@36 -- # local sub_id=0 00:37:09.766 20:54:28 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:37:09.766 20:54:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:09.766 20:54:28 -- common/autotest_common.sh@10 -- # set +x 00:37:10.024 20:54:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:10.024 20:54:28 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:10.024 20:54:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:10.024 20:54:28 -- common/autotest_common.sh@10 -- # set +x 00:37:10.024 20:54:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:10.024 20:54:28 -- target/dif.sh@45 -- # for sub in "$@" 00:37:10.024 20:54:28 -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:10.024 20:54:28 -- target/dif.sh@36 -- # local sub_id=1 00:37:10.024 20:54:28 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:10.024 20:54:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:10.024 20:54:28 -- common/autotest_common.sh@10 -- # set +x 00:37:10.024 20:54:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:10.024 20:54:28 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:10.024 20:54:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:10.024 20:54:28 -- common/autotest_common.sh@10 -- # set +x 00:37:10.024 20:54:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:10.024 00:37:10.024 real 0m26.464s 00:37:10.024 user 5m18.910s 00:37:10.024 sys 0m3.765s 00:37:10.024 20:54:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:10.024 20:54:28 -- common/autotest_common.sh@10 -- # set +x 00:37:10.024 ************************************ 00:37:10.024 END TEST fio_dif_rand_params 00:37:10.024 ************************************ 00:37:10.024 20:54:28 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:10.024 20:54:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:37:10.024 20:54:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:37:10.024 20:54:28 -- common/autotest_common.sh@10 -- # set +x 00:37:10.024 ************************************ 00:37:10.024 START TEST fio_dif_digest 00:37:10.024 ************************************ 00:37:10.024 20:54:28 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:37:10.024 20:54:28 -- target/dif.sh@123 -- # local NULL_DIF 00:37:10.024 20:54:28 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:10.025 20:54:28 -- target/dif.sh@125 -- # local hdgst ddgst 00:37:10.025 20:54:28 -- target/dif.sh@127 -- # NULL_DIF=3 00:37:10.025 20:54:28 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:10.025 20:54:28 -- target/dif.sh@127 -- # numjobs=3 00:37:10.025 20:54:28 -- target/dif.sh@127 -- # iodepth=3 00:37:10.025 20:54:28 -- target/dif.sh@127 -- # runtime=10 00:37:10.025 20:54:28 -- target/dif.sh@128 -- # hdgst=true 00:37:10.025 20:54:28 -- target/dif.sh@128 -- # ddgst=true 00:37:10.025 20:54:28 -- target/dif.sh@130 -- # create_subsystems 0 00:37:10.025 20:54:28 -- target/dif.sh@28 -- # local sub 00:37:10.025 20:54:28 -- target/dif.sh@30 -- # for sub in "$@" 00:37:10.025 20:54:28 -- target/dif.sh@31 -- # create_subsystem 0 00:37:10.025 20:54:28 -- target/dif.sh@18 -- # local sub_id=0 00:37:10.025 20:54:28 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:10.025 20:54:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:10.025 20:54:28 -- common/autotest_common.sh@10 -- # set +x 00:37:10.025 bdev_null0 00:37:10.025 20:54:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
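
Annotation: rpc_cmd above drives SPDK's JSON-RPC server; a condensed sketch of the same digest-test setup issued directly through scripts/rpc.py follows. The flags are the ones visible in this trace, including the subsystem wiring that appears in the next entries; treating rpc_cmd as a thin rpc.py wrapper is an assumption.

# Sketch of the fio_dif_digest subsystem setup, issued via rpc.py directly.
RPC=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
# 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata, DIF type 3
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
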
00:37:10.025 20:54:28 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:10.025 20:54:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:10.025 20:54:28 -- common/autotest_common.sh@10 -- # set +x 00:37:10.025 20:54:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:10.025 20:54:28 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:10.025 20:54:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:10.025 20:54:28 -- common/autotest_common.sh@10 -- # set +x 00:37:10.025 20:54:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:10.025 20:54:28 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:10.025 20:54:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:10.025 20:54:28 -- common/autotest_common.sh@10 -- # set +x 00:37:10.025 [2024-04-26 20:54:28.209509] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:10.025 20:54:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:10.025 20:54:28 -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:10.025 20:54:28 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:10.025 20:54:28 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:10.025 20:54:28 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:37:10.025 20:54:28 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:10.025 20:54:28 -- common/autotest_common.sh@1318 -- # local sanitizers 00:37:10.025 20:54:28 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:10.025 20:54:28 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:37:10.025 20:54:28 -- common/autotest_common.sh@1320 -- # shift 00:37:10.025 20:54:28 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:37:10.025 20:54:28 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:10.025 20:54:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:37:10.025 20:54:28 -- nvmf/common.sh@520 -- # config=() 00:37:10.025 20:54:28 -- nvmf/common.sh@520 -- # local subsystem config 00:37:10.025 20:54:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:37:10.025 20:54:28 -- target/dif.sh@82 -- # gen_fio_conf 00:37:10.025 20:54:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:37:10.025 { 00:37:10.025 "params": { 00:37:10.025 "name": "Nvme$subsystem", 00:37:10.025 "trtype": "$TEST_TRANSPORT", 00:37:10.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:10.025 "adrfam": "ipv4", 00:37:10.025 "trsvcid": "$NVMF_PORT", 00:37:10.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:10.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:10.025 "hdgst": ${hdgst:-false}, 00:37:10.025 "ddgst": ${ddgst:-false} 00:37:10.025 }, 00:37:10.025 "method": "bdev_nvme_attach_controller" 00:37:10.025 } 00:37:10.025 EOF 00:37:10.025 )") 00:37:10.025 20:54:28 -- target/dif.sh@54 -- # local file 00:37:10.025 20:54:28 -- target/dif.sh@56 -- # cat 00:37:10.025 20:54:28 -- nvmf/common.sh@542 -- # cat 00:37:10.025 20:54:28 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:37:10.025 20:54:28 
-- common/autotest_common.sh@1324 -- # grep libasan 00:37:10.025 20:54:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:37:10.025 20:54:28 -- target/dif.sh@72 -- # (( file = 1 )) 00:37:10.025 20:54:28 -- target/dif.sh@72 -- # (( file <= files )) 00:37:10.025 20:54:28 -- nvmf/common.sh@544 -- # jq . 00:37:10.025 20:54:28 -- nvmf/common.sh@545 -- # IFS=, 00:37:10.025 20:54:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:37:10.025 "params": { 00:37:10.025 "name": "Nvme0", 00:37:10.025 "trtype": "tcp", 00:37:10.025 "traddr": "10.0.0.2", 00:37:10.025 "adrfam": "ipv4", 00:37:10.025 "trsvcid": "4420", 00:37:10.025 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:10.025 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:10.025 "hdgst": true, 00:37:10.025 "ddgst": true 00:37:10.025 }, 00:37:10.025 "method": "bdev_nvme_attach_controller" 00:37:10.025 }' 00:37:10.025 20:54:28 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:10.025 20:54:28 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:10.025 20:54:28 -- common/autotest_common.sh@1326 -- # break 00:37:10.025 20:54:28 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:10.025 20:54:28 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:10.283 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:10.283 ... 00:37:10.283 fio-3.35 00:37:10.283 Starting 3 threads 00:37:10.541 EAL: No free 2048 kB hugepages reported on node 1 00:37:11.110 [2024-04-26 20:54:29.166604] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
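
Annotation: the digest job just started runs 3 randread threads at 128 KiB blocks and queue depth 3 for 10 seconds, with header and data digests enabled on the NVMe/TCP connection (the hdgst/ddgst true fields in the JSON above). A rough plain-fio equivalent under stated assumptions; conf.json and Nvme0n1 are illustrative stand-ins, not values taken from the log:

plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev
# conf.json stands in for the /dev/fd/62 config; Nvme0n1 assumes SPDK's usual
# "<controller name>n<nsid>" bdev naming. Some fio builds want these keys in a
# jobfile rather than on the command line.
LD_PRELOAD="/usr/lib64/libasan.so.8 $plugin" /usr/src/fio/fio \
  --name=filename0 --ioengine=spdk_bdev --spdk_json_conf=conf.json \
  --thread=1 --rw=randread --bs=128k --iodepth=3 --numjobs=3 \
  --runtime=10 --time_based=1 --filename=Nvme0n1
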
00:37:11.110 [2024-04-26 20:54:29.166674] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:37:21.082 00:37:21.082 filename0: (groupid=0, jobs=1): err= 0: pid=3811784: Fri Apr 26 20:54:39 2024 00:37:21.082 read: IOPS=290, BW=36.3MiB/s (38.1MB/s)(365MiB/10047msec) 00:37:21.082 slat (nsec): min=6254, max=39717, avg=9538.55, stdev=2907.36 00:37:21.082 clat (usec): min=7636, max=52693, avg=10292.66, stdev=1322.98 00:37:21.082 lat (usec): min=7649, max=52702, avg=10302.20, stdev=1322.98 00:37:21.082 clat percentiles (usec): 00:37:21.082 | 1.00th=[ 8717], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9634], 00:37:21.082 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:37:21.082 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11600], 00:37:21.082 | 99.00th=[12649], 99.50th=[13304], 99.90th=[15664], 99.95th=[50070], 00:37:21.082 | 99.99th=[52691] 00:37:21.082 bw ( KiB/s): min=36352, max=38656, per=33.28%, avg=37363.20, stdev=666.92, samples=20 00:37:21.082 iops : min= 284, max= 302, avg=291.90, stdev= 5.21, samples=20 00:37:21.082 lat (msec) : 10=37.83%, 20=62.10%, 100=0.07% 00:37:21.082 cpu : usr=98.02%, sys=1.71%, ctx=15, majf=0, minf=1638 00:37:21.082 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:21.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.082 issued rwts: total=2921,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.082 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:21.082 filename0: (groupid=0, jobs=1): err= 0: pid=3811785: Fri Apr 26 20:54:39 2024 00:37:21.082 read: IOPS=292, BW=36.6MiB/s (38.4MB/s)(368MiB/10046msec) 00:37:21.082 slat (nsec): min=4901, max=77749, avg=12159.15, stdev=4095.56 00:37:21.082 clat (usec): min=7954, max=47563, avg=10217.96, stdev=1208.44 00:37:21.082 lat (usec): min=7964, max=47579, avg=10230.12, stdev=1208.37 00:37:21.082 clat percentiles (usec): 00:37:21.082 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9241], 20.00th=[ 9634], 00:37:21.082 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:37:21.082 | 70.00th=[10552], 80.00th=[10683], 90.00th=[11076], 95.00th=[11469], 00:37:21.082 | 99.00th=[12256], 99.50th=[12780], 99.90th=[15270], 99.95th=[45876], 00:37:21.082 | 99.99th=[47449] 00:37:21.082 bw ( KiB/s): min=36608, max=38912, per=33.51%, avg=37619.20, stdev=661.73, samples=20 00:37:21.082 iops : min= 286, max= 304, avg=293.90, stdev= 5.17, samples=20 00:37:21.082 lat (msec) : 10=40.50%, 20=59.44%, 50=0.07% 00:37:21.082 cpu : usr=97.34%, sys=2.22%, ctx=15, majf=0, minf=1634 00:37:21.082 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:21.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.082 issued rwts: total=2941,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.082 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:21.082 filename0: (groupid=0, jobs=1): err= 0: pid=3811786: Fri Apr 26 20:54:39 2024 00:37:21.082 read: IOPS=293, BW=36.7MiB/s (38.5MB/s)(369MiB/10045msec) 00:37:21.082 slat (nsec): min=6310, max=96368, avg=11719.49, stdev=3821.79 00:37:21.082 clat (usec): min=7588, max=50851, avg=10190.71, stdev=1304.04 00:37:21.082 lat (usec): min=7598, max=50863, avg=10202.43, stdev=1304.47 00:37:21.082 clat percentiles (usec): 
00:37:21.082 | 1.00th=[ 8586], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9503], 00:37:21.082 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:37:21.082 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11076], 95.00th=[11600], 00:37:21.082 | 99.00th=[12649], 99.50th=[13566], 99.90th=[17695], 99.95th=[46924], 00:37:21.082 | 99.99th=[50594] 00:37:21.082 bw ( KiB/s): min=33792, max=38912, per=33.60%, avg=37721.60, stdev=1405.30, samples=20 00:37:21.082 iops : min= 264, max= 304, avg=294.70, stdev=10.98, samples=20 00:37:21.082 lat (msec) : 10=43.71%, 20=56.22%, 50=0.03%, 100=0.03% 00:37:21.082 cpu : usr=96.72%, sys=2.98%, ctx=18, majf=0, minf=1633 00:37:21.082 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:21.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.083 issued rwts: total=2949,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.083 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:21.083 00:37:21.083 Run status group 0 (all jobs): 00:37:21.083 READ: bw=110MiB/s (115MB/s), 36.3MiB/s-36.7MiB/s (38.1MB/s-38.5MB/s), io=1101MiB (1155MB), run=10045-10047msec 00:37:21.653 ----------------------------------------------------- 00:37:21.653 Suppressions used: 00:37:21.653 count bytes template 00:37:21.653 5 44 /usr/src/fio/parse.c 00:37:21.653 1 8 libtcmalloc_minimal.so 00:37:21.653 1 904 libcrypto.so 00:37:21.653 ----------------------------------------------------- 00:37:21.653 00:37:21.653 20:54:39 -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:21.653 20:54:39 -- target/dif.sh@43 -- # local sub 00:37:21.653 20:54:39 -- target/dif.sh@45 -- # for sub in "$@" 00:37:21.653 20:54:39 -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:21.653 20:54:39 -- target/dif.sh@36 -- # local sub_id=0 00:37:21.653 20:54:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:21.653 20:54:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:21.653 20:54:39 -- common/autotest_common.sh@10 -- # set +x 00:37:21.653 20:54:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:21.653 20:54:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:21.653 20:54:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:21.653 20:54:39 -- common/autotest_common.sh@10 -- # set +x 00:37:21.653 20:54:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:21.653 00:37:21.653 real 0m11.614s 00:37:21.653 user 0m46.050s 00:37:21.653 sys 0m1.035s 00:37:21.653 20:54:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:21.653 20:54:39 -- common/autotest_common.sh@10 -- # set +x 00:37:21.653 ************************************ 00:37:21.653 END TEST fio_dif_digest 00:37:21.653 ************************************ 00:37:21.653 20:54:39 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:21.653 20:54:39 -- target/dif.sh@147 -- # nvmftestfini 00:37:21.653 20:54:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:37:21.653 20:54:39 -- nvmf/common.sh@116 -- # sync 00:37:21.653 20:54:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:37:21.653 20:54:39 -- nvmf/common.sh@119 -- # set +e 00:37:21.653 20:54:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:37:21.653 20:54:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:37:21.653 rmmod nvme_tcp 00:37:21.653 rmmod nvme_fabrics 00:37:21.653 rmmod nvme_keyring 00:37:21.653 20:54:39 -- nvmf/common.sh@122 -- # modprobe -v 
-r nvme-fabrics 00:37:21.653 20:54:39 -- nvmf/common.sh@123 -- # set -e 00:37:21.653 20:54:39 -- nvmf/common.sh@124 -- # return 0 00:37:21.653 20:54:39 -- nvmf/common.sh@477 -- # '[' -n 3799779 ']' 00:37:21.653 20:54:39 -- nvmf/common.sh@478 -- # killprocess 3799779 00:37:21.653 20:54:39 -- common/autotest_common.sh@926 -- # '[' -z 3799779 ']' 00:37:21.653 20:54:39 -- common/autotest_common.sh@930 -- # kill -0 3799779 00:37:21.653 20:54:39 -- common/autotest_common.sh@931 -- # uname 00:37:21.653 20:54:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:37:21.653 20:54:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3799779 00:37:21.653 20:54:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:37:21.653 20:54:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:37:21.653 20:54:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3799779' 00:37:21.653 killing process with pid 3799779 00:37:21.653 20:54:39 -- common/autotest_common.sh@945 -- # kill 3799779 00:37:21.653 20:54:39 -- common/autotest_common.sh@950 -- # wait 3799779 00:37:22.224 20:54:40 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:37:22.224 20:54:40 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:37:24.760 Waiting for block devices as requested 00:37:24.760 0000:c9:00.0 (8086 0a54): vfio-pci -> nvme 00:37:24.760 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:25.018 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:25.018 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:25.018 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:37:25.018 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:25.275 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:37:25.275 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:25.275 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:37:25.275 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:25.533 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:37:25.533 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:37:25.533 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:37:25.533 0000:ca:00.0 (8086 0a54): vfio-pci -> nvme 00:37:25.791 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:25.791 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:37:25.791 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:25.791 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:37:26.051 20:54:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:37:26.051 20:54:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:37:26.051 20:54:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:26.051 20:54:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:37:26.051 20:54:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:26.051 20:54:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:26.051 20:54:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:27.962 20:54:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:37:27.962 00:37:27.962 real 1m17.888s 00:37:27.962 user 8m15.951s 00:37:27.962 sys 0m16.266s 00:37:27.962 20:54:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:27.962 20:54:46 -- common/autotest_common.sh@10 -- # set +x 00:37:27.962 ************************************ 00:37:27.962 END TEST nvmf_dif 00:37:27.962 ************************************ 00:37:28.220 20:54:46 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:28.220 20:54:46 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:37:28.220 20:54:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:37:28.220 20:54:46 -- common/autotest_common.sh@10 -- # set +x 00:37:28.220 ************************************ 00:37:28.220 START TEST nvmf_abort_qd_sizes 00:37:28.220 ************************************ 00:37:28.220 20:54:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:28.220 * Looking for test storage... 00:37:28.220 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:37:28.220 20:54:46 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:37:28.220 20:54:46 -- nvmf/common.sh@7 -- # uname -s 00:37:28.220 20:54:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:28.220 20:54:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:28.220 20:54:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:28.220 20:54:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:28.220 20:54:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:28.220 20:54:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:28.220 20:54:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:28.220 20:54:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:28.220 20:54:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:28.220 20:54:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:28.220 20:54:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:37:28.220 20:54:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:37:28.220 20:54:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:28.220 20:54:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:28.220 20:54:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:37:28.220 20:54:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:37:28.220 20:54:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:28.220 20:54:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:28.220 20:54:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:28.220 20:54:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.220 20:54:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.220 20:54:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.220 20:54:46 -- paths/export.sh@5 -- # export PATH 00:37:28.220 20:54:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.220 20:54:46 -- nvmf/common.sh@46 -- # : 0 00:37:28.220 20:54:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:37:28.220 20:54:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:37:28.220 20:54:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:37:28.220 20:54:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:28.220 20:54:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:28.220 20:54:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:37:28.220 20:54:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:37:28.220 20:54:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:37:28.220 20:54:46 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:37:28.220 20:54:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:37:28.220 20:54:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:28.220 20:54:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:37:28.220 20:54:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:37:28.220 20:54:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:37:28.220 20:54:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:28.220 20:54:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:28.220 20:54:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:28.220 20:54:46 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:37:28.220 20:54:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:37:28.220 20:54:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:37:28.220 20:54:46 -- common/autotest_common.sh@10 -- # set +x 00:37:33.493 20:54:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:37:33.493 20:54:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:37:33.493 20:54:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:37:33.493 20:54:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:37:33.493 20:54:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:37:33.493 20:54:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:37:33.493 20:54:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:37:33.493 20:54:51 -- nvmf/common.sh@294 -- # net_devs=() 00:37:33.493 20:54:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:37:33.493 20:54:51 -- nvmf/common.sh@295 -- # e810=() 00:37:33.493 20:54:51 -- nvmf/common.sh@295 -- # local -ga e810 00:37:33.493 20:54:51 -- nvmf/common.sh@296 -- # x722=() 00:37:33.493 20:54:51 -- nvmf/common.sh@296 -- # local -ga x722 00:37:33.493 20:54:51 -- nvmf/common.sh@297 -- # mlx=() 00:37:33.493 20:54:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:37:33.493 20:54:51 -- nvmf/common.sh@300 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:33.493 20:54:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:33.493 20:54:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:33.493 20:54:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:33.493 20:54:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:33.493 20:54:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:33.493 20:54:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:33.493 20:54:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:33.493 20:54:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:33.493 20:54:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:33.493 20:54:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:33.493 20:54:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:37:33.493 20:54:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:37:33.493 20:54:51 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:37:33.493 20:54:51 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:37:33.493 20:54:51 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:37:33.493 20:54:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:37:33.493 20:54:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:37:33.493 20:54:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:37:33.493 Found 0000:27:00.0 (0x8086 - 0x159b) 00:37:33.493 20:54:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:37:33.493 20:54:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:37:33.493 20:54:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:33.493 20:54:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:33.493 20:54:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:37:33.493 20:54:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:37:33.493 20:54:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:37:33.493 Found 0000:27:00.1 (0x8086 - 0x159b) 00:37:33.493 20:54:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:37:33.493 20:54:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:37:33.493 20:54:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:33.493 20:54:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:33.493 20:54:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:37:33.493 20:54:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:37:33.493 20:54:51 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:37:33.493 20:54:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:37:33.493 20:54:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:33.493 20:54:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:37:33.493 20:54:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:33.493 20:54:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:37:33.493 Found net devices under 0000:27:00.0: cvl_0_0 00:37:33.493 20:54:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:37:33.493 20:54:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:37:33.493 20:54:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:33.493 20:54:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:37:33.493 20:54:51 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:33.493 20:54:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:37:33.493 Found net devices under 0000:27:00.1: cvl_0_1 00:37:33.493 20:54:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:37:33.493 20:54:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:37:33.493 20:54:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:37:33.493 20:54:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:37:33.493 20:54:51 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:37:33.493 20:54:51 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:37:33.493 20:54:51 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:33.493 20:54:51 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:33.493 20:54:51 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:33.493 20:54:51 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:37:33.493 20:54:51 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:33.493 20:54:51 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:33.493 20:54:51 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:37:33.493 20:54:51 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:33.493 20:54:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:33.493 20:54:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:37:33.493 20:54:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:37:33.493 20:54:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:37:33.493 20:54:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:33.493 20:54:51 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:33.493 20:54:51 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:33.493 20:54:51 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:37:33.493 20:54:51 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:33.493 20:54:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:33.493 20:54:51 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:33.493 20:54:51 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:37:33.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:33.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:37:33.493 00:37:33.493 --- 10.0.0.2 ping statistics --- 00:37:33.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:33.493 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:37:33.493 20:54:51 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:33.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:33.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.390 ms 00:37:33.493 00:37:33.493 --- 10.0.0.1 ping statistics --- 00:37:33.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:33.493 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:37:33.493 20:54:51 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:33.493 20:54:51 -- nvmf/common.sh@410 -- # return 0 00:37:33.493 20:54:51 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:37:33.493 20:54:51 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:37:36.031 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:37:36.031 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:37:36.031 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:37:36.031 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:37:36.031 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:37:36.031 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:37:36.031 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:37:36.032 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:37:36.032 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:37:36.032 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:37:36.292 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:37:36.292 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:37:36.292 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:37:36.292 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:37:36.292 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:37:36.292 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:37:38.196 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:37:38.196 0000:ca:00.0 (8086 0a54): nvme -> vfio-pci 00:37:38.457 20:54:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:38.457 20:54:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:37:38.457 20:54:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:37:38.457 20:54:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:38.457 20:54:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:37:38.457 20:54:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:37:38.457 20:54:56 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:37:38.457 20:54:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:37:38.457 20:54:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:37:38.457 20:54:56 -- common/autotest_common.sh@10 -- # set +x 00:37:38.457 20:54:56 -- nvmf/common.sh@469 -- # nvmfpid=3821135 00:37:38.457 20:54:56 -- nvmf/common.sh@470 -- # waitforlisten 3821135 00:37:38.457 20:54:56 -- common/autotest_common.sh@819 -- # '[' -z 3821135 ']' 00:37:38.457 20:54:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:38.457 20:54:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:37:38.457 20:54:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:38.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:38.457 20:54:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:37:38.457 20:54:56 -- common/autotest_common.sh@10 -- # set +x 00:37:38.457 20:54:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:38.457 [2024-04-26 20:54:56.660227] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:37:38.457 [2024-04-26 20:54:56.660336] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:38.457 EAL: No free 2048 kB hugepages reported on node 1 00:37:38.457 [2024-04-26 20:54:56.787272] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:38.718 [2024-04-26 20:54:56.899869] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:37:38.718 [2024-04-26 20:54:56.900062] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:38.718 [2024-04-26 20:54:56.900078] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:38.718 [2024-04-26 20:54:56.900088] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:38.718 [2024-04-26 20:54:56.900172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:38.718 [2024-04-26 20:54:56.900307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:38.718 [2024-04-26 20:54:56.900428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:38.718 [2024-04-26 20:54:56.900435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:37:39.285 20:54:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:37:39.285 20:54:57 -- common/autotest_common.sh@852 -- # return 0 00:37:39.285 20:54:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:37:39.285 20:54:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:37:39.285 20:54:57 -- common/autotest_common.sh@10 -- # set +x 00:37:39.285 20:54:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:39.285 20:54:57 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:39.285 20:54:57 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:37:39.285 20:54:57 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:37:39.285 20:54:57 -- scripts/common.sh@311 -- # local bdf bdfs 00:37:39.285 20:54:57 -- scripts/common.sh@312 -- # local nvmes 00:37:39.285 20:54:57 -- scripts/common.sh@314 -- # [[ -n 0000:c9:00.0 0000:ca:00.0 ]] 00:37:39.285 20:54:57 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:39.285 20:54:57 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:37:39.285 20:54:57 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:c9:00.0 ]] 00:37:39.285 20:54:57 -- scripts/common.sh@322 -- # uname -s 00:37:39.285 20:54:57 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:37:39.285 20:54:57 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:37:39.285 20:54:57 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:37:39.285 20:54:57 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:ca:00.0 ]] 00:37:39.285 20:54:57 -- scripts/common.sh@322 -- # uname -s 00:37:39.285 20:54:57 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:37:39.285 20:54:57 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:37:39.285 20:54:57 -- scripts/common.sh@327 -- # (( 2 )) 00:37:39.285 20:54:57 -- scripts/common.sh@328 -- # printf '%s\n' 0000:c9:00.0 0000:ca:00.0 00:37:39.285 20:54:57 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:37:39.285 20:54:57 -- target/abort_qd_sizes.sh@81 -- # 
nvme=0000:c9:00.0 00:37:39.285 20:54:57 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:37:39.285 20:54:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:37:39.285 20:54:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:37:39.285 20:54:57 -- common/autotest_common.sh@10 -- # set +x 00:37:39.285 ************************************ 00:37:39.285 START TEST spdk_target_abort 00:37:39.285 ************************************ 00:37:39.285 20:54:57 -- common/autotest_common.sh@1104 -- # spdk_target 00:37:39.285 20:54:57 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:39.285 20:54:57 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:37:39.285 20:54:57 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:c9:00.0 -b spdk_target 00:37:39.285 20:54:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:39.285 20:54:57 -- common/autotest_common.sh@10 -- # set +x 00:37:42.578 spdk_targetn1 00:37:42.578 20:55:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:42.578 20:55:00 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:42.578 20:55:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:42.578 20:55:00 -- common/autotest_common.sh@10 -- # set +x 00:37:42.578 [2024-04-26 20:55:00.263167] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:42.578 20:55:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:42.578 20:55:00 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:37:42.578 20:55:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:42.578 20:55:00 -- common/autotest_common.sh@10 -- # set +x 00:37:42.578 20:55:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:42.578 20:55:00 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:37:42.578 20:55:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:42.578 20:55:00 -- common/autotest_common.sh@10 -- # set +x 00:37:42.578 20:55:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:42.578 20:55:00 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:37:42.578 20:55:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:42.578 20:55:00 -- common/autotest_common.sh@10 -- # set +x 00:37:42.578 [2024-04-26 20:55:00.297159] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:42.578 20:55:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:42.578 20:55:00 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:37:42.578 20:55:00 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:42.578 20:55:00 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:42.578 20:55:00 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:42.578 20:55:00 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:42.578 20:55:00 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:37:42.578 20:55:00 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:42.578 20:55:00 -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:42.579 20:55:00 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:42.579 20:55:00 -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:37:42.579 20:55:00 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:42.579 20:55:00 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:42.579 20:55:00 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:42.579 20:55:00 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:42.579 20:55:00 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:42.579 20:55:00 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:42.579 20:55:00 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:42.579 20:55:00 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:42.579 20:55:00 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:37:42.579 20:55:00 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:42.579 20:55:00 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:37:42.579 EAL: No free 2048 kB hugepages reported on node 1 00:37:45.220 Initializing NVMe Controllers 00:37:45.220 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:37:45.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:37:45.220 Initialization complete. Launching workers. 00:37:45.220 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 14821, failed: 0 00:37:45.220 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1985, failed to submit 12836 00:37:45.220 success 732, unsuccess 1253, failed 0 00:37:45.220 20:55:03 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:45.220 20:55:03 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:37:45.480 EAL: No free 2048 kB hugepages reported on node 1 00:37:48.768 Initializing NVMe Controllers 00:37:48.768 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:37:48.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:37:48.768 Initialization complete. Launching workers. 
00:37:48.768 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8667, failed: 0 00:37:48.768 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1201, failed to submit 7466 00:37:48.768 success 383, unsuccess 818, failed 0 00:37:48.768 20:55:06 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:48.768 20:55:06 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:37:48.768 EAL: No free 2048 kB hugepages reported on node 1 00:37:52.057 Initializing NVMe Controllers 00:37:52.057 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:37:52.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:37:52.057 Initialization complete. Launching workers. 00:37:52.057 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 40906, failed: 0 00:37:52.057 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2694, failed to submit 38212 00:37:52.057 success 590, unsuccess 2104, failed 0 00:37:52.057 20:55:10 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:37:52.057 20:55:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:52.057 20:55:10 -- common/autotest_common.sh@10 -- # set +x 00:37:52.057 20:55:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:52.057 20:55:10 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:52.057 20:55:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:52.057 20:55:10 -- common/autotest_common.sh@10 -- # set +x 00:37:54.596 20:55:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:54.597 20:55:12 -- target/abort_qd_sizes.sh@62 -- # killprocess 3821135 00:37:54.597 20:55:12 -- common/autotest_common.sh@926 -- # '[' -z 3821135 ']' 00:37:54.597 20:55:12 -- common/autotest_common.sh@930 -- # kill -0 3821135 00:37:54.597 20:55:12 -- common/autotest_common.sh@931 -- # uname 00:37:54.597 20:55:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:37:54.597 20:55:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3821135 00:37:54.597 20:55:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:37:54.597 20:55:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:37:54.597 20:55:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3821135' 00:37:54.597 killing process with pid 3821135 00:37:54.597 20:55:12 -- common/autotest_common.sh@945 -- # kill 3821135 00:37:54.597 20:55:12 -- common/autotest_common.sh@950 -- # wait 3821135 00:37:54.855 00:37:54.855 real 0m15.571s 00:37:54.855 user 1m2.118s 00:37:54.855 sys 0m1.271s 00:37:54.855 20:55:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:54.855 20:55:12 -- common/autotest_common.sh@10 -- # set +x 00:37:54.855 ************************************ 00:37:54.855 END TEST spdk_target_abort 00:37:54.855 ************************************ 00:37:54.855 20:55:13 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:37:54.855 20:55:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:37:54.855 20:55:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:37:54.855 20:55:13 -- common/autotest_common.sh@10 -- # set 
+x 00:37:54.855 ************************************ 00:37:54.855 START TEST kernel_target_abort 00:37:54.855 ************************************ 00:37:54.855 20:55:13 -- common/autotest_common.sh@1104 -- # kernel_target 00:37:54.855 20:55:13 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:37:54.855 20:55:13 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:37:54.855 20:55:13 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:37:54.855 20:55:13 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:37:54.855 20:55:13 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:37:54.855 20:55:13 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:37:54.855 20:55:13 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:54.855 20:55:13 -- nvmf/common.sh@627 -- # local block nvme 00:37:54.855 20:55:13 -- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:37:54.855 20:55:13 -- nvmf/common.sh@630 -- # modprobe nvmet 00:37:54.855 20:55:13 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:54.855 20:55:13 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:37:57.404 Waiting for block devices as requested 00:37:57.404 0000:c9:00.0 (8086 0a54): vfio-pci -> nvme 00:37:57.404 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:57.404 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:57.405 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:57.405 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:37:57.665 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:57.665 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:37:57.665 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:57.665 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:37:57.925 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:57.925 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:37:57.925 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:37:57.925 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:37:58.185 0000:ca:00.0 (8086 0a54): vfio-pci -> nvme 00:37:58.185 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:58.185 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:37:58.443 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:58.443 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:37:59.009 20:55:17 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:37:59.009 20:55:17 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:59.009 20:55:17 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:37:59.009 20:55:17 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:37:59.009 20:55:17 -- scripts/common.sh@389 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:59.268 No valid GPT data, bailing 00:37:59.268 20:55:17 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:59.268 20:55:17 -- scripts/common.sh@393 -- # pt= 00:37:59.268 20:55:17 -- scripts/common.sh@394 -- # return 1 00:37:59.268 20:55:17 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:37:59.268 20:55:17 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:37:59.268 20:55:17 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:37:59.268 20:55:17 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:37:59.268 20:55:17 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:37:59.268 20:55:17 -- scripts/common.sh@389 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:37:59.268 No valid GPT 
data, bailing 00:37:59.268 20:55:17 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:37:59.268 20:55:17 -- scripts/common.sh@393 -- # pt= 00:37:59.268 20:55:17 -- scripts/common.sh@394 -- # return 1 00:37:59.268 20:55:17 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:37:59.268 20:55:17 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme1n1 ]] 00:37:59.268 20:55:17 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:37:59.268 20:55:17 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:37:59.268 20:55:17 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:59.268 20:55:17 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:37:59.268 20:55:17 -- nvmf/common.sh@654 -- # echo 1 00:37:59.268 20:55:17 -- nvmf/common.sh@655 -- # echo /dev/nvme1n1 00:37:59.268 20:55:17 -- nvmf/common.sh@656 -- # echo 1 00:37:59.268 20:55:17 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:37:59.268 20:55:17 -- nvmf/common.sh@663 -- # echo tcp 00:37:59.268 20:55:17 -- nvmf/common.sh@664 -- # echo 4420 00:37:59.268 20:55:17 -- nvmf/common.sh@665 -- # echo ipv4 00:37:59.268 20:55:17 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:59.268 20:55:17 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -a 10.0.0.1 -t tcp -s 4420 00:37:59.268 00:37:59.268 Discovery Log Number of Records 2, Generation counter 2 00:37:59.268 =====Discovery Log Entry 0====== 00:37:59.268 trtype: tcp 00:37:59.268 adrfam: ipv4 00:37:59.268 subtype: current discovery subsystem 00:37:59.268 treq: not specified, sq flow control disable supported 00:37:59.268 portid: 1 00:37:59.268 trsvcid: 4420 00:37:59.268 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:59.268 traddr: 10.0.0.1 00:37:59.268 eflags: none 00:37:59.268 sectype: none 00:37:59.268 =====Discovery Log Entry 1====== 00:37:59.268 trtype: tcp 00:37:59.268 adrfam: ipv4 00:37:59.268 subtype: nvme subsystem 00:37:59.268 treq: not specified, sq flow control disable supported 00:37:59.268 portid: 1 00:37:59.268 trsvcid: 4420 00:37:59.268 subnqn: kernel_target 00:37:59.268 traddr: 10.0.0.1 00:37:59.268 eflags: none 00:37:59.268 sectype: none 00:37:59.268 20:55:17 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:37:59.268 20:55:17 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:59.268 20:55:17 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:59.268 20:55:17 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:59.268 20:55:17 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:59.268 20:55:17 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:37:59.268 20:55:17 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:59.268 20:55:17 -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:59.268 20:55:17 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:59.268 20:55:17 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:59.268 20:55:17 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:59.268 20:55:17 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:59.268 20:55:17 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:59.268 20:55:17 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr 
trsvcid subnqn 00:37:59.268 20:55:17 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:59.268 20:55:17 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:59.268 20:55:17 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:59.268 20:55:17 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:59.268 20:55:17 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:37:59.268 20:55:17 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:59.268 20:55:17 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:37:59.268 EAL: No free 2048 kB hugepages reported on node 1 00:38:02.555 Initializing NVMe Controllers 00:38:02.555 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:38:02.555 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:38:02.555 Initialization complete. Launching workers. 00:38:02.555 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 52863, failed: 0 00:38:02.555 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 52863, failed to submit 0 00:38:02.555 success 0, unsuccess 52863, failed 0 00:38:02.555 20:55:20 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:02.555 20:55:20 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:38:02.555 EAL: No free 2048 kB hugepages reported on node 1 00:38:05.843 Initializing NVMe Controllers 00:38:05.843 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:38:05.843 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:38:05.843 Initialization complete. Launching workers. 00:38:05.843 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 96034, failed: 0 00:38:05.843 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 24238, failed to submit 71796 00:38:05.843 success 0, unsuccess 24238, failed 0 00:38:05.843 20:55:23 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:05.843 20:55:23 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:38:05.843 EAL: No free 2048 kB hugepages reported on node 1 00:38:09.132 Initializing NVMe Controllers 00:38:09.132 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:38:09.132 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:38:09.132 Initialization complete. Launching workers. 
00:38:09.132 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 91484, failed: 0 00:38:09.132 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 22870, failed to submit 68614 00:38:09.132 success 0, unsuccess 22870, failed 0 00:38:09.132 20:55:26 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:38:09.132 20:55:26 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:38:09.132 20:55:26 -- nvmf/common.sh@677 -- # echo 0 00:38:09.132 20:55:26 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:38:09.132 20:55:26 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:38:09.132 20:55:26 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:09.132 20:55:26 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:38:09.132 20:55:26 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:38:09.132 20:55:26 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:38:09.132 00:38:09.132 real 0m13.811s 00:38:09.132 user 0m5.358s 00:38:09.132 sys 0m3.627s 00:38:09.132 20:55:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:09.132 20:55:26 -- common/autotest_common.sh@10 -- # set +x 00:38:09.132 ************************************ 00:38:09.132 END TEST kernel_target_abort 00:38:09.132 ************************************ 00:38:09.132 20:55:26 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:38:09.132 20:55:26 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:38:09.132 20:55:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:38:09.132 20:55:26 -- nvmf/common.sh@116 -- # sync 00:38:09.132 20:55:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:38:09.132 20:55:26 -- nvmf/common.sh@119 -- # set +e 00:38:09.132 20:55:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:38:09.132 20:55:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:38:09.132 rmmod nvme_tcp 00:38:09.132 rmmod nvme_fabrics 00:38:09.132 rmmod nvme_keyring 00:38:09.132 20:55:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:38:09.132 20:55:26 -- nvmf/common.sh@123 -- # set -e 00:38:09.132 20:55:26 -- nvmf/common.sh@124 -- # return 0 00:38:09.132 20:55:26 -- nvmf/common.sh@477 -- # '[' -n 3821135 ']' 00:38:09.132 20:55:26 -- nvmf/common.sh@478 -- # killprocess 3821135 00:38:09.132 20:55:26 -- common/autotest_common.sh@926 -- # '[' -z 3821135 ']' 00:38:09.132 20:55:26 -- common/autotest_common.sh@930 -- # kill -0 3821135 00:38:09.132 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3821135) - No such process 00:38:09.132 20:55:26 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3821135 is not found' 00:38:09.132 Process with pid 3821135 is not found 00:38:09.132 20:55:26 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:38:09.132 20:55:26 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:38:11.671 0000:c9:00.0 (8086 0a54): Already using the nvme driver 00:38:11.671 0000:74:02.0 (8086 0cfe): Already using the idxd driver 00:38:11.671 0000:f1:02.0 (8086 0cfe): Already using the idxd driver 00:38:11.671 0000:79:02.0 (8086 0cfe): Already using the idxd driver 00:38:11.671 0000:6f:01.0 (8086 0b25): Already using the idxd driver 00:38:11.671 0000:6f:02.0 (8086 0cfe): Already using the idxd driver 00:38:11.671 0000:f6:01.0 (8086 0b25): Already using the idxd driver 00:38:11.671 0000:f6:02.0 
(8086 0cfe): Already using the idxd driver 00:38:11.671 0000:74:01.0 (8086 0b25): Already using the idxd driver 00:38:11.671 0000:6a:02.0 (8086 0cfe): Already using the idxd driver 00:38:11.671 0000:79:01.0 (8086 0b25): Already using the idxd driver 00:38:11.671 0000:ec:01.0 (8086 0b25): Already using the idxd driver 00:38:11.671 0000:6a:01.0 (8086 0b25): Already using the idxd driver 00:38:11.671 0000:ca:00.0 (8086 0a54): Already using the nvme driver 00:38:11.671 0000:ec:02.0 (8086 0cfe): Already using the idxd driver 00:38:11.671 0000:e7:01.0 (8086 0b25): Already using the idxd driver 00:38:11.671 0000:e7:02.0 (8086 0cfe): Already using the idxd driver 00:38:11.671 0000:f1:01.0 (8086 0b25): Already using the idxd driver 00:38:11.671 20:55:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:38:11.671 20:55:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:38:11.671 20:55:29 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:11.671 20:55:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:38:11.672 20:55:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:11.672 20:55:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:11.672 20:55:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:13.579 20:55:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:38:13.579 00:38:13.579 real 0m45.557s 00:38:13.579 user 1m11.220s 00:38:13.579 sys 0m12.301s 00:38:13.579 20:55:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:13.579 20:55:31 -- common/autotest_common.sh@10 -- # set +x 00:38:13.579 ************************************ 00:38:13.579 END TEST nvmf_abort_qd_sizes 00:38:13.579 ************************************ 00:38:13.579 20:55:31 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:38:13.579 20:55:31 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:38:13.579 20:55:31 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:38:13.579 20:55:31 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:38:13.579 20:55:31 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:38:13.839 20:55:31 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:38:13.839 20:55:31 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:38:13.839 20:55:31 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:38:13.839 20:55:31 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:38:13.839 20:55:31 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:38:13.839 20:55:31 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:38:13.839 20:55:31 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:38:13.839 20:55:31 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:38:13.839 20:55:31 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:38:13.839 20:55:31 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:38:13.840 20:55:31 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:38:13.840 20:55:31 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:38:13.840 20:55:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:38:13.840 20:55:31 -- common/autotest_common.sh@10 -- # set +x 00:38:13.840 20:55:31 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:38:13.840 20:55:31 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:38:13.840 20:55:31 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:38:13.840 20:55:31 -- common/autotest_common.sh@10 -- # set +x 00:38:19.115 INFO: APP EXITING 00:38:19.115 INFO: killing all VMs 00:38:19.115 INFO: killing vhost app 00:38:19.115 INFO: EXIT DONE 00:38:21.653 0000:c9:00.0 (8086 0a54): Already using the nvme driver 00:38:21.653 0000:74:02.0 
(8086 0cfe): Already using the idxd driver 00:38:21.653 0000:f1:02.0 (8086 0cfe): Already using the idxd driver 00:38:21.653 0000:79:02.0 (8086 0cfe): Already using the idxd driver 00:38:21.653 0000:6f:01.0 (8086 0b25): Already using the idxd driver 00:38:21.653 0000:6f:02.0 (8086 0cfe): Already using the idxd driver 00:38:21.653 0000:f6:01.0 (8086 0b25): Already using the idxd driver 00:38:21.653 0000:f6:02.0 (8086 0cfe): Already using the idxd driver 00:38:21.653 0000:74:01.0 (8086 0b25): Already using the idxd driver 00:38:21.653 0000:6a:02.0 (8086 0cfe): Already using the idxd driver 00:38:21.653 0000:79:01.0 (8086 0b25): Already using the idxd driver 00:38:21.653 0000:ec:01.0 (8086 0b25): Already using the idxd driver 00:38:21.653 0000:6a:01.0 (8086 0b25): Already using the idxd driver 00:38:21.653 0000:ca:00.0 (8086 0a54): Already using the nvme driver 00:38:21.653 0000:ec:02.0 (8086 0cfe): Already using the idxd driver 00:38:21.653 0000:e7:01.0 (8086 0b25): Already using the idxd driver 00:38:21.653 0000:e7:02.0 (8086 0cfe): Already using the idxd driver 00:38:21.653 0000:f1:01.0 (8086 0b25): Already using the idxd driver 00:38:24.186 Cleaning 00:38:24.186 Removing: /var/run/dpdk/spdk0/config 00:38:24.186 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:24.186 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:24.186 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:24.186 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:24.186 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:38:24.186 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:38:24.186 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:38:24.186 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:38:24.186 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:24.186 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:24.186 Removing: /var/run/dpdk/spdk1/config 00:38:24.186 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:38:24.186 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:38:24.186 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:38:24.186 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:38:24.186 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:38:24.186 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:38:24.186 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:38:24.186 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:38:24.186 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:38:24.186 Removing: /var/run/dpdk/spdk1/hugepage_info 00:38:24.186 Removing: /var/run/dpdk/spdk2/config 00:38:24.186 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:38:24.186 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:38:24.186 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:38:24.186 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:38:24.186 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:38:24.186 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:38:24.186 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:38:24.186 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:38:24.186 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:38:24.186 Removing: /var/run/dpdk/spdk2/hugepage_info 00:38:24.186 Removing: /var/run/dpdk/spdk3/config 00:38:24.186 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:38:24.186 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:38:24.186 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 
00:38:24.186 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:38:24.186 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:38:24.186 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:38:24.186 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:38:24.186 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:38:24.186 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:38:24.445 Removing: /var/run/dpdk/spdk3/hugepage_info 00:38:24.445 Removing: /var/run/dpdk/spdk4/config 00:38:24.445 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:38:24.445 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:38:24.445 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:38:24.445 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:38:24.445 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:38:24.445 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:38:24.445 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:38:24.445 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:38:24.445 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:38:24.445 Removing: /var/run/dpdk/spdk4/hugepage_info 00:38:24.445 Removing: /dev/shm/nvmf_trace.0 00:38:24.445 Removing: /dev/shm/spdk_tgt_trace.pid3315154 00:38:24.445 Removing: /var/run/dpdk/spdk0 00:38:24.445 Removing: /var/run/dpdk/spdk1 00:38:24.445 Removing: /var/run/dpdk/spdk2 00:38:24.445 Removing: /var/run/dpdk/spdk3 00:38:24.445 Removing: /var/run/dpdk/spdk4 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3309779 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3312029 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3315154 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3315882 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3318845 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3321098 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3321536 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3322101 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3322480 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3322948 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3323251 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3323552 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3323923 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3324796 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3328198 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3328662 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3329002 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3329048 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3329961 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3330239 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3330924 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3331216 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3331549 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3331772 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3332164 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3332208 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3333131 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3333446 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3333867 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3336254 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3337921 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3339768 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3341956 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3344243 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3346245 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3348181 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3349999 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3352125 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3353945 
00:38:24.445 Removing: /var/run/dpdk/spdk_pid3355890 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3357889 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3359728 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3361776 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3363674 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3365592 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3367612 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3369452 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3371550 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3373379 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3375461 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3377318 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3379670 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3381730 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3383616 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3385649 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3387553 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3389392 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3391491 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3393456 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3395426 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3397246 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3399360 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3401180 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3403296 00:38:24.445 Removing: /var/run/dpdk/spdk_pid3405118 00:38:24.704 Removing: /var/run/dpdk/spdk_pid3407045 00:38:24.704 Removing: /var/run/dpdk/spdk_pid3409053 00:38:24.704 Removing: /var/run/dpdk/spdk_pid3410890 00:38:24.704 Removing: /var/run/dpdk/spdk_pid3412986 00:38:24.704 Removing: /var/run/dpdk/spdk_pid3414930 00:38:24.704 Removing: /var/run/dpdk/spdk_pid3417189 00:38:24.704 Removing: /var/run/dpdk/spdk_pid3419273 00:38:24.704 Removing: /var/run/dpdk/spdk_pid3421462 00:38:24.704 Removing: /var/run/dpdk/spdk_pid3423982 00:38:24.704 Removing: /var/run/dpdk/spdk_pid3428295 00:38:24.704 Removing: /var/run/dpdk/spdk_pid3523214 00:38:24.704 Removing: /var/run/dpdk/spdk_pid3528282 00:38:24.704 Removing: /var/run/dpdk/spdk_pid3538781 00:38:24.704 Removing: /var/run/dpdk/spdk_pid3544838 00:38:24.704 Removing: /var/run/dpdk/spdk_pid3549354 00:38:24.704 Removing: /var/run/dpdk/spdk_pid3550016 00:38:24.704 Removing: /var/run/dpdk/spdk_pid3555101 00:38:24.704 Removing: /var/run/dpdk/spdk_pid3555444 00:38:24.704 Removing: /var/run/dpdk/spdk_pid3560293 00:38:24.704 Removing: /var/run/dpdk/spdk_pid3567652 00:38:24.704 Removing: /var/run/dpdk/spdk_pid3570577 00:38:24.704 Removing: /var/run/dpdk/spdk_pid3582753 00:38:24.704 Removing: /var/run/dpdk/spdk_pid3593275 00:38:24.704 Removing: /var/run/dpdk/spdk_pid3595385 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3596517 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3616140 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3621161 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3626322 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3628155 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3630542 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3630767 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3630956 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3631190 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3632137 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3634263 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3635542 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3636189 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3642556 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3648967 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3654894 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3694616 
00:38:24.705 Removing: /var/run/dpdk/spdk_pid3699465 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3707980 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3708131 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3714009 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3714313 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3714616 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3715082 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3715277 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3718104 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3720006 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3722106 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3723919 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3726025 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3728092 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3734825 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3735557 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3738092 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3739425 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3747705 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3751016 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3757262 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3764405 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3771399 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3773542 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3775884 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3778090 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3780673 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3781472 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3782098 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3783003 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3784362 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3793790 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3793796 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3800109 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3802526 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3805074 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3807133 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3809762 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3811360 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3821940 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3822546 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3823152 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3826492 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3827055 00:38:24.705 Removing: /var/run/dpdk/spdk_pid3827630 00:38:24.705 Clean 00:38:24.963 killing process with pid 3257931 00:38:33.187 killing process with pid 3257928 00:38:33.187 killing process with pid 3257930 00:38:33.448 killing process with pid 3257929 00:38:33.448 20:55:51 -- common/autotest_common.sh@1436 -- # return 0 00:38:33.448 20:55:51 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:38:33.448 20:55:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:38:33.448 20:55:51 -- common/autotest_common.sh@10 -- # set +x 00:38:33.448 20:55:51 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:38:33.448 20:55:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:38:33.448 20:55:51 -- common/autotest_common.sh@10 -- # set +x 00:38:33.448 20:55:51 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/timing.txt 00:38:33.448 20:55:51 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/udev.log ]] 00:38:33.448 20:55:51 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/udev.log 00:38:33.448 20:55:51 -- spdk/autotest.sh@394 -- # hash lcov 00:38:33.448 20:55:51 -- 
spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:38:33.448 20:55:51 -- spdk/autotest.sh@396 -- # hostname 00:38:33.448 20:55:51 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/dsa-phy-autotest/spdk -t spdk-fcp-07 -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_test.info 00:38:33.709 geninfo: WARNING: invalid characters removed from testname! 00:38:55.753 20:56:10 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:38:55.753 20:56:12 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:38:55.753 20:56:14 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:38:57.137 20:56:15 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:38:58.526 20:56:16 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:38:59.915 20:56:17 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:39:00.859 20:56:19 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:39:00.859 20:56:19 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:39:00.859 20:56:19 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:39:00.859 20:56:19 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:00.859 20:56:19 -- scripts/common.sh@442 -- $ source 
00:39:00.859 20:56:19 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:39:00.859 20:56:19 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:00.859 20:56:19 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:00.859 20:56:19 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:00.859 20:56:19 -- paths/export.sh@5 -- $ export PATH
00:39:00.859 20:56:19 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:00.859 20:56:19 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output
00:39:00.859 20:56:19 -- common/autobuild_common.sh@435 -- $ date +%s
00:39:00.859 20:56:19 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714157779.XXXXXX
00:39:00.859 20:56:19 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714157779.JOeYSV
00:39:00.859 20:56:19 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:39:00.859 20:56:19 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:39:00.859 20:56:19 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/'
00:39:00.859 20:56:19 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp'
00:39:00.859 20:56:19 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:39:00.859 20:56:19 -- common/autobuild_common.sh@451 -- $ get_config_params
00:39:00.859 20:56:19 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:39:00.859 20:56:19 -- common/autotest_common.sh@10 -- $ set +x
00:39:00.859 20:56:19 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:39:00.859 20:56:19 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j128
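The paths/export.sh trace above amounts to prepending each pinned toolchain directory to PATH and exporting the result. Note that it prepends without deduplicating, which is why golangci, go, and protoc each appear twice in the final echo. Reduced to its essentials:

    # Essence of paths/export.sh as traced above: prepend pinned toolchains.
    # No deduplication is attempted, so repeated sourcing grows PATH.
    PATH=/opt/golangci/1.54.2/bin:$PATH
    PATH=/opt/go/1.21.1/bin:$PATH
    PATH=/opt/protoc/21.7/bin:$PATH
    export PATH
    echo "$PATH"

Prepending guarantees the pinned versions shadow anything in /usr/local/bin, at the cost of a slightly bloated PATH.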
00:39:00.859 20:56:19 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/dsa-phy-autotest/spdk
00:39:00.859 20:56:19 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:39:00.859 20:56:19 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:39:00.859 20:56:19 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:39:00.859 20:56:19 -- spdk/autopackage.sh@19 -- $ timing_finish
00:39:00.859 20:56:19 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:39:00.859 20:56:19 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:39:00.859 20:56:19 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/timing.txt
00:39:01.121 20:56:19 -- spdk/autopackage.sh@20 -- $ exit 0
00:39:01.121 + [[ -n 3215553 ]]
00:39:01.121 + sudo kill 3215553
00:39:01.131 [Pipeline] }
00:39:01.151 [Pipeline] // stage
00:39:01.158 [Pipeline] }
00:39:01.178 [Pipeline] // timeout
00:39:01.184 [Pipeline] }
00:39:01.204 [Pipeline] // catchError
00:39:01.209 [Pipeline] }
00:39:01.229 [Pipeline] // wrap
00:39:01.236 [Pipeline] }
00:39:01.252 [Pipeline] // catchError
00:39:01.262 [Pipeline] stage
00:39:01.265 [Pipeline] { (Epilogue)
00:39:01.280 [Pipeline] catchError
00:39:01.281 [Pipeline] {
00:39:01.292 [Pipeline] echo
00:39:01.294 Cleanup processes
00:39:01.298 [Pipeline] sh
00:39:01.584 + sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk
00:39:01.584 3843157 sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk
00:39:01.599 [Pipeline] sh
00:39:01.885 ++ sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk
00:39:01.885 ++ grep -v 'sudo pgrep'
00:39:01.885 ++ awk '{print $1}'
00:39:01.885 + sudo kill -9
00:39:01.885 + true
00:39:01.898 [Pipeline] sh
00:39:02.182 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:39:12.199 [Pipeline] sh
00:39:12.490 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:39:12.490 Artifacts sizes are good
00:39:12.504 [Pipeline] archiveArtifacts
00:39:12.512 Archiving artifacts
00:39:12.769 [Pipeline] sh
00:39:13.083 + sudo chown -R sys_sgci /var/jenkins/workspace/dsa-phy-autotest
00:39:13.096 [Pipeline] cleanWs
00:39:13.107 [WS-CLEANUP] Deleting project workspace...
00:39:13.107 [WS-CLEANUP] Deferred wipeout is used...
00:39:13.113 [WS-CLEANUP] done
00:39:13.115 [Pipeline] }
00:39:13.135 [Pipeline] // catchError
00:39:13.145 [Pipeline] sh
00:39:13.429 + logger -p user.info -t JENKINS-CI
00:39:13.439 [Pipeline] }
00:39:13.453 [Pipeline] // stage
00:39:13.458 [Pipeline] }
00:39:13.475 [Pipeline] // node
00:39:13.479 [Pipeline] End of Pipeline
00:39:13.520 Finished: SUCCESS
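One step in the epilogue above worth noting: timing_finish feeds the accumulated timing.txt to FlameGraph's flamegraph.pl to visualize where build time went. flamegraph.pl reads folded "name value" lines and writes SVG to stdout; since a set -x trace does not show redirections, the output path in this standalone sketch is an assumption:

    # Rough standalone equivalent of the timing_finish step traced above.
    # flamegraph.pl writes SVG to stdout; the redirect target below is
    # assumed, as the xtrace does not reveal the script's actual one.
    /usr/local/FlameGraph/flamegraph.pl \
        --title 'Build Timing' \
        --nametype Step: \
        --countname seconds \
        ./output/timing.txt > ./output/timing.svg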